Test Report: KVM_Linux_crio 20315

b15a094293fe6765e372e2dddd744fc5f5e61b59:2025-02-14:38357

Failed tests (11/321)

TestAddons/parallel/Ingress (151.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-371781 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-371781 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-371781 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1a8f5dee-4e76-40b6-8543-4ddcdad0735f] Pending
helpers_test.go:344: "nginx" [1a8f5dee-4e76-40b6-8543-4ddcdad0735f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1a8f5dee-4e76-40b6-8543-4ddcdad0735f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002734923s
I0214 20:47:33.977885  250783 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-371781 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.485793219s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-371781 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.67
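Triage note on the curl failure above: "ssh: Process exited with status 28" means the command run inside the VM (curl) exited with status 28, which is curl's exit code for an operation timeout. Combined with the ~2m10s spent retrying, this points to the ingress-nginx controller never answering on 127.0.0.1:80 inside the VM, rather than the connection being refused. A minimal manual triage sketch, assuming the addons-371781 profile is still running; the ingress object is defined in testdata/nginx-ingress-v1.yaml and its name is not shown in this log, so a generic listing is used:

	out/minikube-linux-amd64 -p addons-371781 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-371781 get ingress -A
	kubectl --context addons-371781 -n ingress-nginx get pods -o wide
	kubectl --context addons-371781 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50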
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-371781 -n addons-371781
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 logs -n 25: (1.294268675s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-168650                                                                     | download-only-168650 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| delete  | -p download-only-068536                                                                     | download-only-068536 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| delete  | -p download-only-168650                                                                     | download-only-168650 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-490503 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |                     |
	|         | binary-mirror-490503                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:36107                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-490503                                                                     | binary-mirror-490503 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| addons  | enable dashboard -p                                                                         | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |                     |
	|         | addons-371781                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |                     |
	|         | addons-371781                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-371781 --wait=true                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:46 UTC | 14 Feb 25 20:46 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:46 UTC | 14 Feb 25 20:47 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | -p addons-371781                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-371781 ip                                                                            | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-371781 ssh cat                                                                       | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | /opt/local-path-provisioner/pvc-75fb9ef7-af9b-4cf9-a68e-f5a0c0cc3c43_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-371781 addons disable                                                                | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-371781 ssh curl -s                                                                   | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-371781 addons                                                                        | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:47 UTC | 14 Feb 25 20:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-371781 ip                                                                            | addons-371781        | jenkins | v1.35.0 | 14 Feb 25 20:49 UTC | 14 Feb 25 20:49 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 20:44:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 20:44:31.978158  251426 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:44:31.978256  251426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:31.978265  251426 out.go:358] Setting ErrFile to fd 2...
	I0214 20:44:31.978270  251426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:31.978422  251426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 20:44:31.979100  251426 out.go:352] Setting JSON to false
	I0214 20:44:31.979967  251426 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5216,"bootTime":1739560656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:44:31.980068  251426 start.go:140] virtualization: kvm guest
	I0214 20:44:32.027762  251426 out.go:177] * [addons-371781] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 20:44:32.099147  251426 notify.go:220] Checking for updates...
	I0214 20:44:32.183046  251426 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 20:44:32.256455  251426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:44:32.339789  251426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:44:32.410726  251426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:44:32.412069  251426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 20:44:32.413237  251426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 20:44:32.414529  251426 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:44:32.444820  251426 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 20:44:32.445943  251426 start.go:304] selected driver: kvm2
	I0214 20:44:32.445957  251426 start.go:908] validating driver "kvm2" against <nil>
	I0214 20:44:32.445969  251426 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 20:44:32.446612  251426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:32.446707  251426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 20:44:32.461616  251426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 20:44:32.461671  251426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 20:44:32.461948  251426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 20:44:32.461982  251426 cni.go:84] Creating CNI manager for ""
	I0214 20:44:32.462026  251426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:44:32.462035  251426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 20:44:32.462087  251426 start.go:347] cluster config:
	{Name:addons-371781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-371781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:44:32.462181  251426 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:32.463586  251426 out.go:177] * Starting "addons-371781" primary control-plane node in "addons-371781" cluster
	I0214 20:44:32.464556  251426 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 20:44:32.464598  251426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 20:44:32.464609  251426 cache.go:56] Caching tarball of preloaded images
	I0214 20:44:32.464700  251426 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 20:44:32.464713  251426 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 20:44:32.465023  251426 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/config.json ...
	I0214 20:44:32.465044  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/config.json: {Name:mk808822be953ef7dcbb7de701ff2f866c1b3ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:44:32.465227  251426 start.go:360] acquireMachinesLock for addons-371781: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 20:44:32.465281  251426 start.go:364] duration metric: took 38.222µs to acquireMachinesLock for "addons-371781"
	I0214 20:44:32.465302  251426 start.go:93] Provisioning new machine with config: &{Name:addons-371781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-371781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 20:44:32.465354  251426 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 20:44:32.466855  251426 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0214 20:44:32.466992  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:44:32.467034  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:44:32.480014  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0214 20:44:32.480557  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:44:32.481243  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:44:32.481270  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:44:32.481609  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:44:32.481802  251426 main.go:141] libmachine: (addons-371781) Calling .GetMachineName
	I0214 20:44:32.481949  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:32.482118  251426 start.go:159] libmachine.API.Create for "addons-371781" (driver="kvm2")
	I0214 20:44:32.482180  251426 client.go:168] LocalClient.Create starting
	I0214 20:44:32.482219  251426 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 20:44:32.731321  251426 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 20:44:33.075522  251426 main.go:141] libmachine: Running pre-create checks...
	I0214 20:44:33.075544  251426 main.go:141] libmachine: (addons-371781) Calling .PreCreateCheck
	I0214 20:44:33.075994  251426 main.go:141] libmachine: (addons-371781) Calling .GetConfigRaw
	I0214 20:44:33.076418  251426 main.go:141] libmachine: Creating machine...
	I0214 20:44:33.076434  251426 main.go:141] libmachine: (addons-371781) Calling .Create
	I0214 20:44:33.076575  251426 main.go:141] libmachine: (addons-371781) creating KVM machine...
	I0214 20:44:33.076602  251426 main.go:141] libmachine: (addons-371781) creating network...
	I0214 20:44:33.077698  251426 main.go:141] libmachine: (addons-371781) DBG | found existing default KVM network
	I0214 20:44:33.078421  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:33.078271  251448 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I0214 20:44:33.078470  251426 main.go:141] libmachine: (addons-371781) DBG | created network xml: 
	I0214 20:44:33.078499  251426 main.go:141] libmachine: (addons-371781) DBG | <network>
	I0214 20:44:33.078511  251426 main.go:141] libmachine: (addons-371781) DBG |   <name>mk-addons-371781</name>
	I0214 20:44:33.078530  251426 main.go:141] libmachine: (addons-371781) DBG |   <dns enable='no'/>
	I0214 20:44:33.078544  251426 main.go:141] libmachine: (addons-371781) DBG |   
	I0214 20:44:33.078555  251426 main.go:141] libmachine: (addons-371781) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0214 20:44:33.078569  251426 main.go:141] libmachine: (addons-371781) DBG |     <dhcp>
	I0214 20:44:33.078581  251426 main.go:141] libmachine: (addons-371781) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0214 20:44:33.078593  251426 main.go:141] libmachine: (addons-371781) DBG |     </dhcp>
	I0214 20:44:33.078601  251426 main.go:141] libmachine: (addons-371781) DBG |   </ip>
	I0214 20:44:33.078610  251426 main.go:141] libmachine: (addons-371781) DBG |   
	I0214 20:44:33.078640  251426 main.go:141] libmachine: (addons-371781) DBG | </network>
	I0214 20:44:33.078654  251426 main.go:141] libmachine: (addons-371781) DBG | 
	I0214 20:44:33.083555  251426 main.go:141] libmachine: (addons-371781) DBG | trying to create private KVM network mk-addons-371781 192.168.39.0/24...
	I0214 20:44:33.146021  251426 main.go:141] libmachine: (addons-371781) DBG | private KVM network mk-addons-371781 192.168.39.0/24 created
	I0214 20:44:33.146053  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:33.146017  251448 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:44:33.146067  251426 main.go:141] libmachine: (addons-371781) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781 ...
	I0214 20:44:33.146084  251426 main.go:141] libmachine: (addons-371781) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 20:44:33.146222  251426 main.go:141] libmachine: (addons-371781) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 20:44:33.490799  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:33.490646  251448 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa...
	I0214 20:44:33.600237  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:33.600128  251448 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/addons-371781.rawdisk...
	I0214 20:44:33.600268  251426 main.go:141] libmachine: (addons-371781) DBG | Writing magic tar header
	I0214 20:44:33.600282  251426 main.go:141] libmachine: (addons-371781) DBG | Writing SSH key tar header
	I0214 20:44:33.600296  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:33.600263  251448 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781 ...
	I0214 20:44:33.600401  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781
	I0214 20:44:33.600430  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781 (perms=drwx------)
	I0214 20:44:33.600446  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 20:44:33.600457  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 20:44:33.600467  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:44:33.600482  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 20:44:33.600497  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 20:44:33.600514  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home/jenkins
	I0214 20:44:33.600538  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 20:44:33.600549  251426 main.go:141] libmachine: (addons-371781) DBG | checking permissions on dir: /home
	I0214 20:44:33.600560  251426 main.go:141] libmachine: (addons-371781) DBG | skipping /home - not owner
	I0214 20:44:33.600569  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 20:44:33.600576  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 20:44:33.600582  251426 main.go:141] libmachine: (addons-371781) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 20:44:33.600591  251426 main.go:141] libmachine: (addons-371781) creating domain...
	I0214 20:44:33.601603  251426 main.go:141] libmachine: (addons-371781) define libvirt domain using xml: 
	I0214 20:44:33.601630  251426 main.go:141] libmachine: (addons-371781) <domain type='kvm'>
	I0214 20:44:33.601641  251426 main.go:141] libmachine: (addons-371781)   <name>addons-371781</name>
	I0214 20:44:33.601653  251426 main.go:141] libmachine: (addons-371781)   <memory unit='MiB'>4000</memory>
	I0214 20:44:33.601666  251426 main.go:141] libmachine: (addons-371781)   <vcpu>2</vcpu>
	I0214 20:44:33.601678  251426 main.go:141] libmachine: (addons-371781)   <features>
	I0214 20:44:33.601683  251426 main.go:141] libmachine: (addons-371781)     <acpi/>
	I0214 20:44:33.601687  251426 main.go:141] libmachine: (addons-371781)     <apic/>
	I0214 20:44:33.601694  251426 main.go:141] libmachine: (addons-371781)     <pae/>
	I0214 20:44:33.601698  251426 main.go:141] libmachine: (addons-371781)     
	I0214 20:44:33.601704  251426 main.go:141] libmachine: (addons-371781)   </features>
	I0214 20:44:33.601709  251426 main.go:141] libmachine: (addons-371781)   <cpu mode='host-passthrough'>
	I0214 20:44:33.601719  251426 main.go:141] libmachine: (addons-371781)   
	I0214 20:44:33.601725  251426 main.go:141] libmachine: (addons-371781)   </cpu>
	I0214 20:44:33.601730  251426 main.go:141] libmachine: (addons-371781)   <os>
	I0214 20:44:33.601736  251426 main.go:141] libmachine: (addons-371781)     <type>hvm</type>
	I0214 20:44:33.601741  251426 main.go:141] libmachine: (addons-371781)     <boot dev='cdrom'/>
	I0214 20:44:33.601746  251426 main.go:141] libmachine: (addons-371781)     <boot dev='hd'/>
	I0214 20:44:33.601760  251426 main.go:141] libmachine: (addons-371781)     <bootmenu enable='no'/>
	I0214 20:44:33.601779  251426 main.go:141] libmachine: (addons-371781)   </os>
	I0214 20:44:33.601785  251426 main.go:141] libmachine: (addons-371781)   <devices>
	I0214 20:44:33.601789  251426 main.go:141] libmachine: (addons-371781)     <disk type='file' device='cdrom'>
	I0214 20:44:33.601799  251426 main.go:141] libmachine: (addons-371781)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/boot2docker.iso'/>
	I0214 20:44:33.601807  251426 main.go:141] libmachine: (addons-371781)       <target dev='hdc' bus='scsi'/>
	I0214 20:44:33.601812  251426 main.go:141] libmachine: (addons-371781)       <readonly/>
	I0214 20:44:33.601818  251426 main.go:141] libmachine: (addons-371781)     </disk>
	I0214 20:44:33.601824  251426 main.go:141] libmachine: (addons-371781)     <disk type='file' device='disk'>
	I0214 20:44:33.601831  251426 main.go:141] libmachine: (addons-371781)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 20:44:33.601838  251426 main.go:141] libmachine: (addons-371781)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/addons-371781.rawdisk'/>
	I0214 20:44:33.601847  251426 main.go:141] libmachine: (addons-371781)       <target dev='hda' bus='virtio'/>
	I0214 20:44:33.601851  251426 main.go:141] libmachine: (addons-371781)     </disk>
	I0214 20:44:33.601856  251426 main.go:141] libmachine: (addons-371781)     <interface type='network'>
	I0214 20:44:33.601863  251426 main.go:141] libmachine: (addons-371781)       <source network='mk-addons-371781'/>
	I0214 20:44:33.601868  251426 main.go:141] libmachine: (addons-371781)       <model type='virtio'/>
	I0214 20:44:33.601872  251426 main.go:141] libmachine: (addons-371781)     </interface>
	I0214 20:44:33.601878  251426 main.go:141] libmachine: (addons-371781)     <interface type='network'>
	I0214 20:44:33.601883  251426 main.go:141] libmachine: (addons-371781)       <source network='default'/>
	I0214 20:44:33.601890  251426 main.go:141] libmachine: (addons-371781)       <model type='virtio'/>
	I0214 20:44:33.601895  251426 main.go:141] libmachine: (addons-371781)     </interface>
	I0214 20:44:33.601902  251426 main.go:141] libmachine: (addons-371781)     <serial type='pty'>
	I0214 20:44:33.601906  251426 main.go:141] libmachine: (addons-371781)       <target port='0'/>
	I0214 20:44:33.601914  251426 main.go:141] libmachine: (addons-371781)     </serial>
	I0214 20:44:33.601926  251426 main.go:141] libmachine: (addons-371781)     <console type='pty'>
	I0214 20:44:33.601933  251426 main.go:141] libmachine: (addons-371781)       <target type='serial' port='0'/>
	I0214 20:44:33.601939  251426 main.go:141] libmachine: (addons-371781)     </console>
	I0214 20:44:33.601946  251426 main.go:141] libmachine: (addons-371781)     <rng model='virtio'>
	I0214 20:44:33.601952  251426 main.go:141] libmachine: (addons-371781)       <backend model='random'>/dev/random</backend>
	I0214 20:44:33.601956  251426 main.go:141] libmachine: (addons-371781)     </rng>
	I0214 20:44:33.601960  251426 main.go:141] libmachine: (addons-371781)     
	I0214 20:44:33.601965  251426 main.go:141] libmachine: (addons-371781)     
	I0214 20:44:33.601970  251426 main.go:141] libmachine: (addons-371781)   </devices>
	I0214 20:44:33.601979  251426 main.go:141] libmachine: (addons-371781) </domain>
	I0214 20:44:33.602001  251426 main.go:141] libmachine: (addons-371781) 
	I0214 20:44:33.607432  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:7f:e6:91 in network default
	I0214 20:44:33.607932  251426 main.go:141] libmachine: (addons-371781) starting domain...
	I0214 20:44:33.607948  251426 main.go:141] libmachine: (addons-371781) ensuring networks are active...
	I0214 20:44:33.607968  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:33.608525  251426 main.go:141] libmachine: (addons-371781) Ensuring network default is active
	I0214 20:44:33.608890  251426 main.go:141] libmachine: (addons-371781) Ensuring network mk-addons-371781 is active
	I0214 20:44:33.609392  251426 main.go:141] libmachine: (addons-371781) getting domain XML...
	I0214 20:44:33.610005  251426 main.go:141] libmachine: (addons-371781) creating domain...
	I0214 20:44:34.112844  251426 main.go:141] libmachine: (addons-371781) waiting for IP...
	I0214 20:44:34.113698  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:34.113990  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:34.114123  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:34.114005  251448 retry.go:31] will retry after 200.975564ms: waiting for domain to come up
	I0214 20:44:34.316321  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:34.316682  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:34.316713  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:34.316635  251448 retry.go:31] will retry after 252.193378ms: waiting for domain to come up
	I0214 20:44:34.569958  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:34.570402  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:34.570456  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:34.570384  251448 retry.go:31] will retry after 325.936368ms: waiting for domain to come up
	I0214 20:44:34.897950  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:34.898391  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:34.898423  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:34.898337  251448 retry.go:31] will retry after 412.718748ms: waiting for domain to come up
	I0214 20:44:35.312620  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:35.313025  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:35.313077  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:35.312982  251448 retry.go:31] will retry after 630.280951ms: waiting for domain to come up
	I0214 20:44:35.944736  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:35.945073  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:35.945114  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:35.945063  251448 retry.go:31] will retry after 768.931663ms: waiting for domain to come up
	I0214 20:44:36.716189  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:36.716577  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:36.716605  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:36.716538  251448 retry.go:31] will retry after 779.547961ms: waiting for domain to come up
	I0214 20:44:37.499210  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:37.499681  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:37.499725  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:37.499633  251448 retry.go:31] will retry after 1.345577234s: waiting for domain to come up
	I0214 20:44:38.847135  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:38.847565  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:38.847595  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:38.847535  251448 retry.go:31] will retry after 1.362302652s: waiting for domain to come up
	I0214 20:44:40.212127  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:40.212481  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:40.212536  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:40.212480  251448 retry.go:31] will retry after 2.008184279s: waiting for domain to come up
	I0214 20:44:42.221863  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:42.222312  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:42.222341  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:42.222266  251448 retry.go:31] will retry after 1.873079696s: waiting for domain to come up
	I0214 20:44:44.097608  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:44.097949  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:44.097980  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:44.097910  251448 retry.go:31] will retry after 3.392098513s: waiting for domain to come up
	I0214 20:44:47.491936  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:47.492449  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:47.492520  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:47.492444  251448 retry.go:31] will retry after 4.17071552s: waiting for domain to come up
	I0214 20:44:51.664337  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:51.664653  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find current IP address of domain addons-371781 in network mk-addons-371781
	I0214 20:44:51.664671  251426 main.go:141] libmachine: (addons-371781) DBG | I0214 20:44:51.664624  251448 retry.go:31] will retry after 5.038447213s: waiting for domain to come up
	I0214 20:44:56.704209  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.704636  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has current primary IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.704656  251426 main.go:141] libmachine: (addons-371781) found domain IP: 192.168.39.67
	I0214 20:44:56.704669  251426 main.go:141] libmachine: (addons-371781) reserving static IP address...
	I0214 20:44:56.705026  251426 main.go:141] libmachine: (addons-371781) DBG | unable to find host DHCP lease matching {name: "addons-371781", mac: "52:54:00:f5:de:60", ip: "192.168.39.67"} in network mk-addons-371781
	I0214 20:44:56.772449  251426 main.go:141] libmachine: (addons-371781) reserved static IP address 192.168.39.67 for domain addons-371781
	I0214 20:44:56.772471  251426 main.go:141] libmachine: (addons-371781) waiting for SSH...
	I0214 20:44:56.772482  251426 main.go:141] libmachine: (addons-371781) DBG | Getting to WaitForSSH function...
	I0214 20:44:56.775047  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.775448  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:56.775473  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.775691  251426 main.go:141] libmachine: (addons-371781) DBG | Using SSH client type: external
	I0214 20:44:56.775721  251426 main.go:141] libmachine: (addons-371781) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa (-rw-------)
	I0214 20:44:56.775751  251426 main.go:141] libmachine: (addons-371781) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 20:44:56.775761  251426 main.go:141] libmachine: (addons-371781) DBG | About to run SSH command:
	I0214 20:44:56.775776  251426 main.go:141] libmachine: (addons-371781) DBG | exit 0
	I0214 20:44:56.898117  251426 main.go:141] libmachine: (addons-371781) DBG | SSH cmd err, output: <nil>: 
	I0214 20:44:56.898315  251426 main.go:141] libmachine: (addons-371781) KVM machine creation complete
	I0214 20:44:56.898649  251426 main.go:141] libmachine: (addons-371781) Calling .GetConfigRaw
	I0214 20:44:56.899241  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:56.899456  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:56.899594  251426 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 20:44:56.899609  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:44:56.900778  251426 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 20:44:56.900812  251426 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 20:44:56.900822  251426 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 20:44:56.900833  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:56.903052  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.903419  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:56.903443  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:56.903627  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:56.903812  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:56.903942  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:56.904063  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:56.904190  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:56.904405  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:56.904416  251426 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 20:44:57.005378  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 20:44:57.005398  251426 main.go:141] libmachine: Detecting the provisioner...
	I0214 20:44:57.005409  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.007521  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.007828  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.007857  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.007947  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:57.008130  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.008286  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.008417  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:57.008552  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:57.008711  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:57.008722  251426 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 20:44:57.110862  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 20:44:57.110925  251426 main.go:141] libmachine: found compatible host: buildroot
	I0214 20:44:57.110936  251426 main.go:141] libmachine: Provisioning with buildroot...
	I0214 20:44:57.110945  251426 main.go:141] libmachine: (addons-371781) Calling .GetMachineName
	I0214 20:44:57.111142  251426 buildroot.go:166] provisioning hostname "addons-371781"
	I0214 20:44:57.111163  251426 main.go:141] libmachine: (addons-371781) Calling .GetMachineName
	I0214 20:44:57.111311  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.113303  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.113652  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.113681  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.113786  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:57.113965  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.114146  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.114310  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:57.114461  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:57.114619  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:57.114654  251426 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-371781 && echo "addons-371781" | sudo tee /etc/hostname
	I0214 20:44:57.234241  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-371781
	
	I0214 20:44:57.234260  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.236624  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.237021  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.237053  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.237193  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:57.237379  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.237532  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.237689  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:57.237850  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:57.238001  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:57.238017  251426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-371781' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-371781/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-371781' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 20:44:57.345870  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 20:44:57.345895  251426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 20:44:57.345923  251426 buildroot.go:174] setting up certificates
	I0214 20:44:57.345935  251426 provision.go:84] configureAuth start
	I0214 20:44:57.345948  251426 main.go:141] libmachine: (addons-371781) Calling .GetMachineName
	I0214 20:44:57.346154  251426 main.go:141] libmachine: (addons-371781) Calling .GetIP
	I0214 20:44:57.348248  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.348503  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.348533  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.348622  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.350539  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.350832  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.350857  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.350969  251426 provision.go:143] copyHostCerts
	I0214 20:44:57.351042  251426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 20:44:57.351163  251426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 20:44:57.351229  251426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 20:44:57.351279  251426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.addons-371781 san=[127.0.0.1 192.168.39.67 addons-371781 localhost minikube]
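	(Editor's note: the provision step above generates a machine server certificate whose SANs cover the VM's IP, its hostname, localhost and "minikube". As a rough, illustrative sketch of what producing such a SAN-bearing certificate involves — a generic crypto/x509 example, self-signed for brevity, not minikube's actual provision.go code — something like the following would do; the Organization and SAN values are simply copied from the log line above.)
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Illustrative only: minikube signs with its own CA; here we self-sign.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-371781"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the "san=[...]" list in the log line above.
			DNSNames:    []string{"addons-371781", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}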
	I0214 20:44:57.677029  251426 provision.go:177] copyRemoteCerts
	I0214 20:44:57.677106  251426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 20:44:57.677127  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.679424  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.679702  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.679728  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.679869  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:57.680035  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.680238  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:57.680375  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
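	(Editor's note: the "new ssh client" entries record the connection parameters the provisioner uses — the VM IP, port 22, the per-machine id_rsa key and the "docker" user. A minimal sketch of opening such a connection with golang.org/x/crypto/ssh is shown below; it is illustrative, not minikube's sshutil implementation, and the key path and address are the ones printed in the log.)
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path and address taken from the log line above (illustrative).
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.67:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /etc/os-release")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}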
	I0214 20:44:57.760830  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 20:44:57.784169  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0214 20:44:57.806176  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 20:44:57.827985  251426 provision.go:87] duration metric: took 482.037494ms to configureAuth
	I0214 20:44:57.828005  251426 buildroot.go:189] setting minikube options for container-runtime
	I0214 20:44:57.828159  251426 config.go:182] Loaded profile config "addons-371781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:44:57.828281  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:57.830722  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.831021  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:57.831046  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:57.831239  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:57.831405  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.831597  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:57.831716  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:57.831849  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:57.832036  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:57.832057  251426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 20:44:58.044136  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 20:44:58.044158  251426 main.go:141] libmachine: Checking connection to Docker...
	I0214 20:44:58.044167  251426 main.go:141] libmachine: (addons-371781) Calling .GetURL
	I0214 20:44:58.045478  251426 main.go:141] libmachine: (addons-371781) DBG | using libvirt version 6000000
	I0214 20:44:58.047588  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.047954  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.047989  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.048126  251426 main.go:141] libmachine: Docker is up and running!
	I0214 20:44:58.048141  251426 main.go:141] libmachine: Reticulating splines...
	I0214 20:44:58.048150  251426 client.go:171] duration metric: took 25.565957287s to LocalClient.Create
	I0214 20:44:58.048187  251426 start.go:167] duration metric: took 25.566064s to libmachine.API.Create "addons-371781"
	I0214 20:44:58.048201  251426 start.go:293] postStartSetup for "addons-371781" (driver="kvm2")
	I0214 20:44:58.048214  251426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 20:44:58.048237  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:58.048455  251426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 20:44:58.048478  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:58.050428  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.050687  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.050726  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.050899  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:58.051063  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:58.051196  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:58.051311  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:44:58.132292  251426 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 20:44:58.136251  251426 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 20:44:58.136273  251426 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 20:44:58.136334  251426 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 20:44:58.136360  251426 start.go:296] duration metric: took 88.152867ms for postStartSetup
	I0214 20:44:58.136386  251426 main.go:141] libmachine: (addons-371781) Calling .GetConfigRaw
	I0214 20:44:58.136844  251426 main.go:141] libmachine: (addons-371781) Calling .GetIP
	I0214 20:44:58.138716  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.139008  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.139037  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.139241  251426 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/config.json ...
	I0214 20:44:58.139399  251426 start.go:128] duration metric: took 25.67403547s to createHost
	I0214 20:44:58.139420  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:58.141298  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.141623  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.141649  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.141752  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:58.141956  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:58.142100  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:58.142259  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:58.142384  251426 main.go:141] libmachine: Using SSH client type: native
	I0214 20:44:58.142573  251426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0214 20:44:58.142584  251426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 20:44:58.242921  251426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739565898.221124364
	
	I0214 20:44:58.242944  251426 fix.go:216] guest clock: 1739565898.221124364
	I0214 20:44:58.242950  251426 fix.go:229] Guest: 2025-02-14 20:44:58.221124364 +0000 UTC Remote: 2025-02-14 20:44:58.1394103 +0000 UTC m=+26.199933013 (delta=81.714064ms)
	I0214 20:44:58.242990  251426 fix.go:200] guest clock delta is within tolerance: 81.714064ms
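	(Editor's note: the clock check above reads "date +%s.%N" on the guest, compares it with the host-side timestamp, and accepts the machine when the absolute delta is below a tolerance. A small sketch of that comparison using the exact values from the log follows; the 2-second threshold and the code structure are illustrative, not fix.go's actual implementation.)
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Guest clock from `date +%s.%N`: 1739565898.221124364
		guest := time.Unix(1739565898, 221124364).UTC()
		// Host-side ("Remote") timestamp reported in the log.
		host := time.Date(2025, time.February, 14, 20, 44, 58, 139410300, time.UTC)
	
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual value
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance) // prints 81.714064ms true
	}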
	I0214 20:44:58.242999  251426 start.go:83] releasing machines lock for "addons-371781", held for 25.777706463s
	I0214 20:44:58.243024  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:58.243242  251426 main.go:141] libmachine: (addons-371781) Calling .GetIP
	I0214 20:44:58.245667  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.245991  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.246012  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.246197  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:58.246644  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:58.246823  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:44:58.246921  251426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 20:44:58.246969  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:58.247018  251426 ssh_runner.go:195] Run: cat /version.json
	I0214 20:44:58.247042  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:44:58.249593  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.249618  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.249886  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.249918  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:58.249939  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.250002  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:58.250146  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:58.250329  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:58.250344  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:44:58.250534  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:58.250555  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:44:58.250714  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:44:58.250724  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:44:58.250857  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:44:58.327244  251426 ssh_runner.go:195] Run: systemctl --version
	I0214 20:44:58.346817  251426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 20:44:58.502876  251426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 20:44:58.508890  251426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 20:44:58.508945  251426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 20:44:58.525892  251426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 20:44:58.525907  251426 start.go:495] detecting cgroup driver to use...
	I0214 20:44:58.525955  251426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 20:44:58.542514  251426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 20:44:58.555653  251426 docker.go:217] disabling cri-docker service (if available) ...
	I0214 20:44:58.555704  251426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 20:44:58.568372  251426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 20:44:58.580883  251426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 20:44:58.694358  251426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 20:44:58.831124  251426 docker.go:233] disabling docker service ...
	I0214 20:44:58.831189  251426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 20:44:58.844561  251426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 20:44:58.856858  251426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 20:44:58.983115  251426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 20:44:59.111262  251426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 20:44:59.124410  251426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 20:44:59.142155  251426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 20:44:59.142217  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.152048  251426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 20:44:59.152101  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.161943  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.171595  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.181211  251426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 20:44:59.191153  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.200989  251426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.217291  251426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 20:44:59.227040  251426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 20:44:59.235794  251426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 20:44:59.235840  251426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 20:44:59.247264  251426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 20:44:59.256177  251426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 20:44:59.380414  251426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 20:44:59.473739  251426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 20:44:59.473820  251426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 20:44:59.478505  251426 start.go:563] Will wait 60s for crictl version
	I0214 20:44:59.478560  251426 ssh_runner.go:195] Run: which crictl
	I0214 20:44:59.482339  251426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 20:44:59.522269  251426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 20:44:59.522349  251426 ssh_runner.go:195] Run: crio --version
	I0214 20:44:59.549079  251426 ssh_runner.go:195] Run: crio --version
	I0214 20:44:59.576309  251426 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 20:44:59.577295  251426 main.go:141] libmachine: (addons-371781) Calling .GetIP
	I0214 20:44:59.579970  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:59.580314  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:44:59.580343  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:44:59.580552  251426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0214 20:44:59.584380  251426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 20:44:59.597322  251426 kubeadm.go:875] updating cluster {Name:addons-371781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-371781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 20:44:59.597425  251426 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 20:44:59.597464  251426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 20:44:59.630016  251426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 20:44:59.630086  251426 ssh_runner.go:195] Run: which lz4
	I0214 20:44:59.633778  251426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 20:44:59.637719  251426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 20:44:59.637747  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 20:45:00.917822  251426 crio.go:462] duration metric: took 1.284058577s to copy over tarball
	I0214 20:45:00.917925  251426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 20:45:02.979542  251426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061571469s)
	I0214 20:45:02.979586  251426 crio.go:469] duration metric: took 2.061733453s to extract the tarball
	I0214 20:45:02.979597  251426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 20:45:03.017295  251426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 20:45:03.058690  251426 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 20:45:03.058709  251426 cache_images.go:84] Images are preloaded, skipping loading
	I0214 20:45:03.058718  251426 kubeadm.go:926] updating node { 192.168.39.67 8443 v1.32.1 crio true true} ...
	I0214 20:45:03.058832  251426 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-371781 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-371781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 20:45:03.058918  251426 ssh_runner.go:195] Run: crio config
	I0214 20:45:03.107003  251426 cni.go:84] Creating CNI manager for ""
	I0214 20:45:03.107022  251426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:45:03.107033  251426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 20:45:03.107067  251426 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-371781 NodeName:addons-371781 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 20:45:03.107240  251426 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-371781"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.67"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 20:45:03.107319  251426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 20:45:03.117152  251426 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 20:45:03.117211  251426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 20:45:03.126526  251426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0214 20:45:03.142612  251426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 20:45:03.158492  251426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0214 20:45:03.173954  251426 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0214 20:45:03.177419  251426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.67	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 20:45:03.188748  251426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 20:45:03.306762  251426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 20:45:03.322328  251426 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781 for IP: 192.168.39.67
	I0214 20:45:03.322344  251426 certs.go:194] generating shared ca certs ...
	I0214 20:45:03.322360  251426 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.322486  251426 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 20:45:03.364122  251426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt ...
	I0214 20:45:03.364147  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt: {Name:mk64069a76e8f4878511d2e03b225bb11546421f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.364904  251426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key ...
	I0214 20:45:03.364917  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key: {Name:mk02c039e791ef88b45a8e01846acc405e67a550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.364992  251426 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 20:45:03.487745  251426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt ...
	I0214 20:45:03.487762  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt: {Name:mkcecd8486a15d17ad40a15379673a6157cc0793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.487876  251426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key ...
	I0214 20:45:03.487886  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key: {Name:mk02f4e4d2982198347fef5cdf46073aacd57c42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.487948  251426 certs.go:256] generating profile certs ...
	I0214 20:45:03.488013  251426 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.key
	I0214 20:45:03.488028  251426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt with IP's: []
	I0214 20:45:03.563813  251426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt ...
	I0214 20:45:03.563833  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: {Name:mkf1d2265ffc0fba9524be75a1abb76d164e4b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.563969  251426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.key ...
	I0214 20:45:03.563982  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.key: {Name:mk2cbe9e6261fc0f7b466e38896a85393df546fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.564071  251426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key.2c04ce5c
	I0214 20:45:03.564090  251426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt.2c04ce5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.67]
	I0214 20:45:03.637917  251426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt.2c04ce5c ...
	I0214 20:45:03.637933  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt.2c04ce5c: {Name:mkd754024e67ae38a88b79e56c91a6e74c3f635c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.638051  251426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key.2c04ce5c ...
	I0214 20:45:03.638065  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key.2c04ce5c: {Name:mkc627212af0b3d61206a94bf6f5bd4dfe90f8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.638755  251426 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt.2c04ce5c -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt
	I0214 20:45:03.638833  251426 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key.2c04ce5c -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key
	I0214 20:45:03.638878  251426 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.key
	I0214 20:45:03.638896  251426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.crt with IP's: []
	I0214 20:45:03.712353  251426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.crt ...
	I0214 20:45:03.712371  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.crt: {Name:mk05659b425c06bbc0c85595d8bbe8d162de2783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.712489  251426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.key ...
	I0214 20:45:03.712502  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.key: {Name:mkcbeccb6e8e557684c149d8aaa3493e30d84c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:03.712683  251426 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 20:45:03.712715  251426 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 20:45:03.712737  251426 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 20:45:03.712760  251426 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 20:45:03.713300  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 20:45:03.737952  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 20:45:03.759624  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 20:45:03.781344  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 20:45:03.803197  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 20:45:03.824746  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 20:45:03.846369  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 20:45:03.867669  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 20:45:03.891732  251426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 20:45:03.915554  251426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 20:45:03.932514  251426 ssh_runner.go:195] Run: openssl version
	I0214 20:45:03.938097  251426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 20:45:03.949831  251426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 20:45:03.954325  251426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 20:45:03.954358  251426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 20:45:03.959992  251426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 20:45:03.972277  251426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 20:45:03.976322  251426 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 20:45:03.976371  251426 kubeadm.go:392] StartCluster: {Name:addons-371781 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-371781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:45:03.976438  251426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 20:45:03.976470  251426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 20:45:04.035735  251426 cri.go:89] found id: ""
	I0214 20:45:04.035798  251426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 20:45:04.048257  251426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 20:45:04.058465  251426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 20:45:04.067587  251426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 20:45:04.067603  251426 kubeadm.go:157] found existing configuration files:
	
	I0214 20:45:04.067645  251426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 20:45:04.076388  251426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 20:45:04.076435  251426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 20:45:04.085341  251426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 20:45:04.093831  251426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 20:45:04.093874  251426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 20:45:04.102727  251426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 20:45:04.111293  251426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 20:45:04.111332  251426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 20:45:04.120175  251426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 20:45:04.128868  251426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 20:45:04.128913  251426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 20:45:04.138048  251426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 20:45:04.294567  251426 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 20:45:13.401674  251426 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 20:45:13.401756  251426 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 20:45:13.401848  251426 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 20:45:13.401975  251426 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 20:45:13.402103  251426 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 20:45:13.402196  251426 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 20:45:13.403464  251426 out.go:235]   - Generating certificates and keys ...
	I0214 20:45:13.403551  251426 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 20:45:13.403607  251426 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 20:45:13.403703  251426 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 20:45:13.403802  251426 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 20:45:13.403893  251426 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 20:45:13.403972  251426 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 20:45:13.404042  251426 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 20:45:13.404211  251426 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-371781 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0214 20:45:13.404281  251426 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 20:45:13.404438  251426 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-371781 localhost] and IPs [192.168.39.67 127.0.0.1 ::1]
	I0214 20:45:13.404534  251426 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 20:45:13.404594  251426 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 20:45:13.404631  251426 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 20:45:13.404675  251426 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 20:45:13.404723  251426 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 20:45:13.404771  251426 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 20:45:13.404824  251426 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 20:45:13.404874  251426 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 20:45:13.404931  251426 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 20:45:13.404994  251426 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 20:45:13.405058  251426 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 20:45:13.406139  251426 out.go:235]   - Booting up control plane ...
	I0214 20:45:13.406216  251426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 20:45:13.406298  251426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 20:45:13.406374  251426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 20:45:13.406470  251426 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 20:45:13.406561  251426 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 20:45:13.406600  251426 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 20:45:13.406722  251426 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 20:45:13.406812  251426 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 20:45:13.406859  251426 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.398893ms
	I0214 20:45:13.406929  251426 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 20:45:13.406983  251426 kubeadm.go:310] [api-check] The API server is healthy after 5.001134832s
	I0214 20:45:13.407120  251426 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 20:45:13.407247  251426 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 20:45:13.407298  251426 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 20:45:13.407454  251426 kubeadm.go:310] [mark-control-plane] Marking the node addons-371781 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 20:45:13.407504  251426 kubeadm.go:310] [bootstrap-token] Using token: 7d9xan.mvzjp77ug2lj8vf0
	I0214 20:45:13.408600  251426 out.go:235]   - Configuring RBAC rules ...
	I0214 20:45:13.408689  251426 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 20:45:13.408782  251426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 20:45:13.408934  251426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 20:45:13.409037  251426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 20:45:13.409128  251426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 20:45:13.409198  251426 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 20:45:13.409291  251426 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 20:45:13.409330  251426 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 20:45:13.409372  251426 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 20:45:13.409378  251426 kubeadm.go:310] 
	I0214 20:45:13.409473  251426 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 20:45:13.409490  251426 kubeadm.go:310] 
	I0214 20:45:13.409591  251426 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 20:45:13.409601  251426 kubeadm.go:310] 
	I0214 20:45:13.409620  251426 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 20:45:13.409672  251426 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 20:45:13.409718  251426 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 20:45:13.409729  251426 kubeadm.go:310] 
	I0214 20:45:13.409772  251426 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 20:45:13.409778  251426 kubeadm.go:310] 
	I0214 20:45:13.409815  251426 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 20:45:13.409820  251426 kubeadm.go:310] 
	I0214 20:45:13.409859  251426 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 20:45:13.409934  251426 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 20:45:13.409988  251426 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 20:45:13.409994  251426 kubeadm.go:310] 
	I0214 20:45:13.410057  251426 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 20:45:13.410128  251426 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 20:45:13.410134  251426 kubeadm.go:310] 
	I0214 20:45:13.410197  251426 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7d9xan.mvzjp77ug2lj8vf0 \
	I0214 20:45:13.410276  251426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 20:45:13.410294  251426 kubeadm.go:310] 	--control-plane 
	I0214 20:45:13.410300  251426 kubeadm.go:310] 
	I0214 20:45:13.410367  251426 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 20:45:13.410375  251426 kubeadm.go:310] 
	I0214 20:45:13.410480  251426 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7d9xan.mvzjp77ug2lj8vf0 \
	I0214 20:45:13.410582  251426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 20:45:13.410594  251426 cni.go:84] Creating CNI manager for ""
	I0214 20:45:13.410600  251426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:45:13.411651  251426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 20:45:13.412535  251426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 20:45:13.422941  251426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0214 20:45:13.439895  251426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 20:45:13.439986  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:13.440020  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-371781 minikube.k8s.io/updated_at=2025_02_14T20_45_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=addons-371781 minikube.k8s.io/primary=true
	I0214 20:45:13.572781  251426 ops.go:34] apiserver oom_adj: -16
	I0214 20:45:13.572899  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:14.073512  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:14.573001  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:15.073659  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:15.573002  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:16.073960  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:16.574002  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:17.073044  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:17.573572  251426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 20:45:17.668941  251426 kubeadm.go:1105] duration metric: took 4.22900744s to wait for elevateKubeSystemPrivileges
	I0214 20:45:17.668999  251426 kubeadm.go:394] duration metric: took 13.69262759s to StartCluster
	I0214 20:45:17.669030  251426 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:17.669187  251426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:45:17.669732  251426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 20:45:17.669985  251426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 20:45:17.670011  251426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 20:45:17.670071  251426 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0214 20:45:17.670195  251426 addons.go:69] Setting yakd=true in profile "addons-371781"
	I0214 20:45:17.670209  251426 addons.go:69] Setting inspektor-gadget=true in profile "addons-371781"
	I0214 20:45:17.670212  251426 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-371781"
	I0214 20:45:17.670229  251426 addons.go:69] Setting default-storageclass=true in profile "addons-371781"
	I0214 20:45:17.670245  251426 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-371781"
	I0214 20:45:17.670249  251426 addons.go:238] Setting addon inspektor-gadget=true in "addons-371781"
	I0214 20:45:17.670247  251426 addons.go:69] Setting storage-provisioner=true in profile "addons-371781"
	I0214 20:45:17.670261  251426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-371781"
	I0214 20:45:17.670264  251426 config.go:182] Loaded profile config "addons-371781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:45:17.670272  251426 addons.go:69] Setting metrics-server=true in profile "addons-371781"
	I0214 20:45:17.670284  251426 addons.go:238] Setting addon metrics-server=true in "addons-371781"
	I0214 20:45:17.670290  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.670301  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.670265  251426 addons.go:238] Setting addon storage-provisioner=true in "addons-371781"
	I0214 20:45:17.670336  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.670381  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.670455  251426 addons.go:69] Setting gcp-auth=true in profile "addons-371781"
	I0214 20:45:17.670476  251426 mustload.go:65] Loading cluster: addons-371781
	I0214 20:45:17.670659  251426 config.go:182] Loaded profile config "addons-371781": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:45:17.670786  251426 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-371781"
	I0214 20:45:17.670789  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.670803  251426 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-371781"
	I0214 20:45:17.670810  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.670816  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.670815  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.670824  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.670853  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.670862  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.670933  251426 addons.go:69] Setting registry=true in profile "addons-371781"
	I0214 20:45:17.670953  251426 addons.go:238] Setting addon registry=true in "addons-371781"
	I0214 20:45:17.670979  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.671018  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.671053  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.671193  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.671231  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.671355  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.671394  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.671398  251426 addons.go:238] Setting addon yakd=true in "addons-371781"
	I0214 20:45:17.671429  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.671493  251426 addons.go:69] Setting ingress=true in profile "addons-371781"
	I0214 20:45:17.671513  251426 addons.go:238] Setting addon ingress=true in "addons-371781"
	I0214 20:45:17.671525  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.671529  251426 addons.go:69] Setting ingress-dns=true in profile "addons-371781"
	I0214 20:45:17.671544  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.671549  251426 addons.go:238] Setting addon ingress-dns=true in "addons-371781"
	I0214 20:45:17.671553  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.671580  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.671915  251426 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-371781"
	I0214 20:45:17.671943  251426 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-371781"
	I0214 20:45:17.671949  251426 out.go:177] * Verifying Kubernetes components...
	I0214 20:45:17.672022  251426 addons.go:69] Setting volcano=true in profile "addons-371781"
	I0214 20:45:17.672050  251426 addons.go:238] Setting addon volcano=true in "addons-371781"
	I0214 20:45:17.672075  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.672420  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.672441  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.672446  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.672460  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.672646  251426 addons.go:69] Setting volumesnapshots=true in profile "addons-371781"
	I0214 20:45:17.672668  251426 addons.go:238] Setting addon volumesnapshots=true in "addons-371781"
	I0214 20:45:17.672696  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.672826  251426 addons.go:69] Setting cloud-spanner=true in profile "addons-371781"
	I0214 20:45:17.672866  251426 addons.go:238] Setting addon cloud-spanner=true in "addons-371781"
	I0214 20:45:17.672904  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.673309  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.673389  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.673543  251426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 20:45:17.673668  251426 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-371781"
	I0214 20:45:17.673747  251426 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-371781"
	I0214 20:45:17.673802  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.674288  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.674353  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.691831  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0214 20:45:17.691844  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0214 20:45:17.691976  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0214 20:45:17.692528  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.693220  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.693240  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.693639  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.693840  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.693915  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I0214 20:45:17.702961  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.702994  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.703009  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.703048  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.703103  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.703138  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.703150  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.703173  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.703698  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0214 20:45:17.704088  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.704131  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.704611  251426 addons.go:238] Setting addon default-storageclass=true in "addons-371781"
	I0214 20:45:17.704658  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.705008  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.705039  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.705235  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I0214 20:45:17.705919  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.706000  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.706129  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.706736  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.706760  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.706894  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.706908  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.706982  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.707115  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.707127  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.707186  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.707241  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.707296  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.707678  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.707720  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.715039  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.715091  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.715242  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.715252  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.715305  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.715346  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.715441  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.715610  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.715837  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.715872  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.715982  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.716023  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.716819  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.716862  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.717824  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.718211  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.718245  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.740690  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37321
	I0214 20:45:17.741239  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.741784  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.741805  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.742948  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0214 20:45:17.743130  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I0214 20:45:17.743277  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.743858  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.743903  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.744150  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.744254  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.745571  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0214 20:45:17.745742  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44191
	I0214 20:45:17.745839  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0214 20:45:17.746077  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.746101  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.746502  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.746701  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.746717  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.746787  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.747492  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.747547  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.747566  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.747583  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.747620  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.747995  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.748048  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.748368  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.748387  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.748436  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.748544  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.748557  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.748987  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.749017  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.749411  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I0214 20:45:17.749845  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.749905  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.749928  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.749976  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.750634  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.750661  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.750731  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.751140  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.751696  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.751713  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.752374  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.752540  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.753071  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.753105  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.753792  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0214 20:45:17.754762  251426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 20:45:17.754914  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37465
	I0214 20:45:17.755402  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.755878  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.755906  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.756042  251426 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 20:45:17.756061  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 20:45:17.756081  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.756184  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.756456  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0214 20:45:17.757059  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.757079  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.757493  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.758061  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.758100  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.758916  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.759242  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.759823  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.759845  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.760030  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.760206  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.760351  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.760481  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.760917  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.761242  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.763391  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.763412  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.763487  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0214 20:45:17.763596  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.763900  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0214 20:45:17.764241  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.764719  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.765247  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.765266  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.765367  251426 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0214 20:45:17.765662  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.766192  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.766227  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.766454  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0214 20:45:17.766556  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0214 20:45:17.767038  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.767308  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.767702  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.767718  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.767782  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.768267  251426 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 20:45:17.768603  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.768634  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.768968  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.769294  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.769708  251426 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 20:45:17.769724  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0214 20:45:17.769742  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.770418  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.770433  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.770852  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.771429  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.771467  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.771800  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.771819  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.772236  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.772485  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.773615  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.774093  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.774163  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.774180  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.774353  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.774547  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.774742  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.775934  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.777032  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.777228  251426 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0214 20:45:17.777509  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0214 20:45:17.777943  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.778295  251426 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0214 20:45:17.778317  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0214 20:45:17.778335  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.778507  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.778523  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.778863  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.778929  251426 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0214 20:45:17.779103  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.780498  251426 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 20:45:17.780515  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 20:45:17.780593  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.781194  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.782486  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 20:45:17.783139  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.783812  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.783831  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.784005  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.784201  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.784426  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.784574  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 20:45:17.784660  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.785213  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.785654  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.785721  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.785879  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.786040  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.786199  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.786327  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.786680  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 20:45:17.787595  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 20:45:17.788459  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 20:45:17.789368  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 20:45:17.790281  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 20:45:17.791256  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 20:45:17.791779  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38939
	I0214 20:45:17.792200  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.792223  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 20:45:17.792240  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 20:45:17.792266  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.792771  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.792791  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.793281  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.793557  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.795778  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.795812  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.796070  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35625
	I0214 20:45:17.796440  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.796462  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.796649  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.796720  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.796892  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.797086  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.797188  251426 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 20:45:17.797288  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.797972  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 20:45:17.797992  251426 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 20:45:17.798011  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.798088  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0214 20:45:17.798112  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.798132  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.798552  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.798655  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.798903  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.799278  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.799293  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.799673  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.799856  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.799916  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41079
	I0214 20:45:17.800269  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.800796  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.800816  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.801017  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33211
	I0214 20:45:17.801354  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.801766  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.801834  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.802548  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0214 20:45:17.803203  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.803296  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.803420  251426 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0214 20:45:17.803439  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.803462  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.803471  251426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0214 20:45:17.803698  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.803720  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.803996  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.804019  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.804071  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.804102  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.804180  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.804312  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.804401  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.804457  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.804556  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.804752  251426 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 20:45:17.804766  251426 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 20:45:17.804784  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.805936  251426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 20:45:17.806348  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.806894  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.807890  251426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 20:45:17.808522  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.809102  251426 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 20:45:17.809123  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0214 20:45:17.809142  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.809892  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.809913  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.810233  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.810403  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.810432  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
	I0214 20:45:17.810603  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.810784  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.811064  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.811170  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.811804  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.811823  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.812242  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.812490  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.813725  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41019
	I0214 20:45:17.813745  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0214 20:45:17.813839  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.813887  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.814173  251426 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 20:45:17.814185  251426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 20:45:17.814201  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.814260  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.814675  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.814747  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.814764  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.815212  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.815228  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.815422  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.815613  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.815643  251426 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-371781"
	I0214 20:45:17.815684  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:17.815997  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.816042  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.816051  251426 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0214 20:45:17.816080  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.816142  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I0214 20:45:17.816326  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.816339  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.816492  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.816513  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.816703  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.816786  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.816970  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44199
	I0214 20:45:17.817017  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.817173  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.817399  251426 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 20:45:17.817416  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 20:45:17.817433  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.818108  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.818107  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.818162  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.818707  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.818728  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.819302  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.820024  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.820047  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.820386  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:17.820475  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:17.820879  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:17.820905  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:17.820913  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:17.820923  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:17.820929  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:17.821755  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.821807  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.821832  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.821851  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.822007  251426 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0214 20:45:17.822105  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.822277  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.822427  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.822615  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.822669  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.822721  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.822779  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:17.822789  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	W0214 20:45:17.822854  251426 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0214 20:45:17.823267  251426 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0214 20:45:17.823257  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.823286  251426 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 20:45:17.823323  251426 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0214 20:45:17.823346  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.823821  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.824085  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.824130  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.824250  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.824437  251426 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 20:45:17.824457  251426 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 20:45:17.824473  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.824442  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.824754  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.824970  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.825098  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.827593  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.827650  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.828195  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.828398  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.828480  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.828575  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.828721  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.828763  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.828773  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.828854  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.828962  251426 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0214 20:45:17.829060  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.828215  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.829242  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.829412  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.829542  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.830188  251426 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0214 20:45:17.830203  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 20:45:17.830217  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.833372  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.833763  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.833782  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.833954  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.834139  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.834300  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.834426  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:17.837076  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0214 20:45:17.837434  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.837813  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.837847  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	W0214 20:45:17.838203  251426 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35154->192.168.39.67:22: read: connection reset by peer
	I0214 20:45:17.838228  251426 retry.go:31] will retry after 161.071982ms: ssh: handshake failed: read tcp 192.168.39.1:35154->192.168.39.67:22: read: connection reset by peer
	I0214 20:45:17.838263  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.838704  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:17.838745  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:17.853178  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0214 20:45:17.853589  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:17.854209  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:17.854235  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:17.854673  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:17.854821  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:17.856591  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:17.857833  251426 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 20:45:17.858767  251426 out.go:177]   - Using image docker.io/busybox:stable
	I0214 20:45:17.859831  251426 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 20:45:17.859844  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 20:45:17.859858  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:17.862736  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.863131  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:17.863314  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:17.863341  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:17.863432  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:17.863536  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:17.863695  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:18.155514  251426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 20:45:18.156491  251426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 20:45:18.158681  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 20:45:18.231302  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0214 20:45:18.271055  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 20:45:18.311523  251426 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0214 20:45:18.311549  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0214 20:45:18.360651  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 20:45:18.360683  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 20:45:18.369989  251426 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 20:45:18.370012  251426 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 20:45:18.394774  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 20:45:18.401494  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 20:45:18.404226  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0214 20:45:18.406968  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 20:45:18.417318  251426 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 20:45:18.417342  251426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 20:45:18.440013  251426 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 20:45:18.440032  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 20:45:18.442537  251426 node_ready.go:35] waiting up to 6m0s for node "addons-371781" to be "Ready" ...
	I0214 20:45:18.467862  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 20:45:18.493981  251426 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 20:45:18.494002  251426 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 20:45:18.495294  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 20:45:18.559913  251426 node_ready.go:49] node "addons-371781" is "Ready"
	I0214 20:45:18.559947  251426 node_ready.go:38] duration metric: took 117.385194ms for node "addons-371781" to be "Ready" ...
	I0214 20:45:18.559964  251426 api_server.go:52] waiting for apiserver process to appear ...
	I0214 20:45:18.560011  251426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 20:45:18.592688  251426 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 20:45:18.592713  251426 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 20:45:18.594083  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 20:45:18.594107  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 20:45:18.610116  251426 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 20:45:18.610132  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 20:45:18.644568  251426 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 20:45:18.644592  251426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 20:45:18.691364  251426 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 20:45:18.691390  251426 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 20:45:18.770984  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 20:45:18.834051  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 20:45:18.834076  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 20:45:18.863629  251426 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 20:45:18.863656  251426 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 20:45:18.891861  251426 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 20:45:18.891891  251426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 20:45:18.908695  251426 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 20:45:18.908720  251426 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 20:45:19.024430  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 20:45:19.024459  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 20:45:19.079922  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 20:45:19.079949  251426 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 20:45:19.087630  251426 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 20:45:19.087656  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 20:45:19.141863  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 20:45:19.284573  251426 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 20:45:19.284620  251426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 20:45:19.330081  251426 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 20:45:19.330106  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 20:45:19.387092  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 20:45:19.634694  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 20:45:19.634720  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 20:45:19.705577  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 20:45:19.875686  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 20:45:19.875719  251426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 20:45:20.245988  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 20:45:20.246015  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 20:45:20.372892  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 20:45:20.372927  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 20:45:20.504130  251426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.347592338s)
	I0214 20:45:20.504569  251426 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0214 20:45:20.504858  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.346053654s)
	I0214 20:45:20.504917  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.273467316s)
	I0214 20:45:20.504928  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:20.504966  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:20.504972  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:20.504989  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:20.507513  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:20.507520  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:20.507531  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:20.507515  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:20.507540  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:20.507549  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:20.507562  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:20.507576  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:20.507630  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:20.507645  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:20.508146  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:20.508161  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:20.508933  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:20.508996  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:20.509016  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:20.715761  251426 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 20:45:20.715796  251426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 20:45:21.021957  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 20:45:21.025548  251426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-371781" context rescaled to 1 replicas
	I0214 20:45:21.047706  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.776606508s)
	I0214 20:45:21.047770  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:21.047784  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:21.048150  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:21.048164  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:21.048175  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:21.048189  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:21.048201  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:21.048420  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:21.048431  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:24.624558  251426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 20:45:24.624613  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:24.628817  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:24.629406  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:24.629435  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:24.629692  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:24.629920  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:24.630113  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:24.630267  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:25.092084  251426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0214 20:45:25.343041  251426 addons.go:238] Setting addon gcp-auth=true in "addons-371781"
	I0214 20:45:25.343132  251426 host.go:66] Checking if "addons-371781" exists ...
	I0214 20:45:25.343711  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:25.343779  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:25.359672  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45783
	I0214 20:45:25.360133  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:25.360719  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:25.360748  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:25.361077  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:25.361714  251426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:45:25.361766  251426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:45:25.377054  251426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I0214 20:45:25.377545  251426 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:45:25.378038  251426 main.go:141] libmachine: Using API Version  1
	I0214 20:45:25.378060  251426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:45:25.378468  251426 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:45:25.378659  251426 main.go:141] libmachine: (addons-371781) Calling .GetState
	I0214 20:45:25.380341  251426 main.go:141] libmachine: (addons-371781) Calling .DriverName
	I0214 20:45:25.380552  251426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 20:45:25.380575  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHHostname
	I0214 20:45:25.383461  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:25.383893  251426 main.go:141] libmachine: (addons-371781) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:de:60", ip: ""} in network mk-addons-371781: {Iface:virbr1 ExpiryTime:2025-02-14 21:44:47 +0000 UTC Type:0 Mac:52:54:00:f5:de:60 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:addons-371781 Clientid:01:52:54:00:f5:de:60}
	I0214 20:45:25.383921  251426 main.go:141] libmachine: (addons-371781) DBG | domain addons-371781 has defined IP address 192.168.39.67 and MAC address 52:54:00:f5:de:60 in network mk-addons-371781
	I0214 20:45:25.384059  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHPort
	I0214 20:45:25.384252  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHKeyPath
	I0214 20:45:25.384419  251426 main.go:141] libmachine: (addons-371781) Calling .GetSSHUsername
	I0214 20:45:25.384610  251426 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/addons-371781/id_rsa Username:docker}
	I0214 20:45:25.636253  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.241430741s)
	I0214 20:45:25.636293  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.23476033s)
	I0214 20:45:25.636315  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636340  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636356  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636370  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636397  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.23214054s)
	I0214 20:45:25.636436  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636439  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.229448474s)
	I0214 20:45:25.636447  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636461  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636471  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636516  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.168626879s)
	I0214 20:45:25.636548  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636561  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636563  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.141246662s)
	I0214 20:45:25.636583  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636591  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.636636  251426 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (7.076609818s)
	I0214 20:45:25.636657  251426 api_server.go:72] duration metric: took 7.966605322s to wait for apiserver process to appear ...
	I0214 20:45:25.636666  251426 api_server.go:88] waiting for apiserver healthz status ...
	I0214 20:45:25.636689  251426 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0214 20:45:25.636900  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.636926  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.636942  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.636971  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.636977  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.636985  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.636992  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637048  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637058  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637061  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637066  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637072  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637075  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637081  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637088  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637122  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.49523258s)
	I0214 20:45:25.637141  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637149  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637161  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637164  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637171  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637178  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637185  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637228  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637240  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.250117463s)
	I0214 20:45:25.637259  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637257  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637268  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637271  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637276  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637283  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.637401  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.931793171s)
	W0214 20:45:25.637433  251426 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 20:45:25.637455  251426 retry.go:31] will retry after 220.777602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 20:45:25.637507  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637531  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637560  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637573  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637726  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637767  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637774  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637808  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.637827  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.637834  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.637841  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.637847  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.638083  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.638122  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.638129  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.638140  251426 addons.go:479] Verifying addon metrics-server=true in "addons-371781"
	I0214 20:45:25.637054  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.86603706s)
	I0214 20:45:25.640552  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.640567  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.640577  251426 addons.go:479] Verifying addon ingress=true in "addons-371781"
	I0214 20:45:25.640581  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.640593  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.640753  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.640780  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.640788  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.640976  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.641006  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.641019  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.641028  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.641034  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.641032  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.641066  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.641078  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.641141  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.641150  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.641157  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.641163  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.641424  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.641452  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.641458  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.641920  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.641958  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.641965  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.641975  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.641983  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.642044  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.642064  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.642071  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.642604  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:25.642823  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.642843  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.642852  251426 addons.go:479] Verifying addon registry=true in "addons-371781"
	I0214 20:45:25.642960  251426 out.go:177] * Verifying ingress addon...
	I0214 20:45:25.643795  251426 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-371781 service yakd-dashboard -n yakd-dashboard
	
	I0214 20:45:25.644439  251426 out.go:177] * Verifying registry addon...
	I0214 20:45:25.645380  251426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 20:45:25.646291  251426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0214 20:45:25.649980  251426 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0214 20:45:25.658262  251426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 20:45:25.658284  251426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 20:45:25.658299  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:25.658284  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:25.658780  251426 api_server.go:141] control plane version: v1.32.1
	I0214 20:45:25.658814  251426 api_server.go:131] duration metric: took 22.139384ms to wait for apiserver health ...
	I0214 20:45:25.658829  251426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 20:45:25.676429  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.676449  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.676809  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.676832  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	W0214 20:45:25.676935  251426 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0214 20:45:25.679227  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:25.679249  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:25.679463  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:25.679482  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:25.743942  251426 system_pods.go:59] 16 kube-system pods found
	I0214 20:45:25.743995  251426 system_pods.go:61] "amd-gpu-device-plugin-xpmjz" [b3a0d9c2-1c02-46b4-a614-9bceca103c13] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0214 20:45:25.744007  251426 system_pods.go:61] "coredns-668d6bf9bc-krs7x" [ee300442-dde7-4afc-af75-8726fe08706e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 20:45:25.744015  251426 system_pods.go:61] "coredns-668d6bf9bc-tdch9" [b4c6cc13-5d14-41f3-84d0-c6d356e99f51] Running
	I0214 20:45:25.744025  251426 system_pods.go:61] "etcd-addons-371781" [2ba8c5aa-4fdf-485b-8e15-bd71e3acf96b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 20:45:25.744036  251426 system_pods.go:61] "kube-apiserver-addons-371781" [002911dc-7624-4c52-abb0-182be6132a95] Running
	I0214 20:45:25.744042  251426 system_pods.go:61] "kube-controller-manager-addons-371781" [76c1f427-1a79-4f17-9a24-c59a0b2ee0c7] Running
	I0214 20:45:25.744050  251426 system_pods.go:61] "kube-ingress-dns-minikube" [c56b6b2e-64f2-4ced-996a-7fd6b0aa3527] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 20:45:25.744058  251426 system_pods.go:61] "kube-proxy-5l22r" [74b98ef1-5fd7-4241-8ddd-a1bd50c13c63] Running
	I0214 20:45:25.744072  251426 system_pods.go:61] "kube-scheduler-addons-371781" [bba1a743-16aa-4743-9a80-7291067d5bf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 20:45:25.744084  251426 system_pods.go:61] "metrics-server-7fbb699795-rts29" [ebeaa3ab-84cc-437b-bd18-6c140dad9938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 20:45:25.744097  251426 system_pods.go:61] "nvidia-device-plugin-daemonset-dnfkm" [9d95b55b-46ad-487c-9125-9f8b59218d7b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 20:45:25.744109  251426 system_pods.go:61] "registry-6c88467877-6d8q9" [112052e6-40f0-43f6-8eab-72c10cd3b9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 20:45:25.744122  251426 system_pods.go:61] "registry-proxy-q4kfl" [38d75acd-d5f9-40f8-a54f-f648b3982f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 20:45:25.744134  251426 system_pods.go:61] "snapshot-controller-68b874b76f-5828g" [4048a80d-0b2c-4ee5-ae9b-3d2f1d29207d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 20:45:25.744146  251426 system_pods.go:61] "snapshot-controller-68b874b76f-r7bxz" [b0d7d98a-1132-42ae-9f8e-944483d54a01] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 20:45:25.744155  251426 system_pods.go:61] "storage-provisioner" [a316e15d-761c-4a1f-8af0-f3c68c0e680f] Running
	I0214 20:45:25.744167  251426 system_pods.go:74] duration metric: took 85.327327ms to wait for pod list to return data ...
	I0214 20:45:25.744180  251426 default_sa.go:34] waiting for default service account to be created ...
	I0214 20:45:25.760075  251426 default_sa.go:45] found service account: "default"
	I0214 20:45:25.760098  251426 default_sa.go:55] duration metric: took 15.906839ms for default service account to be created ...
	I0214 20:45:25.760109  251426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 20:45:25.766293  251426 system_pods.go:86] 16 kube-system pods found
	I0214 20:45:25.766326  251426 system_pods.go:89] "amd-gpu-device-plugin-xpmjz" [b3a0d9c2-1c02-46b4-a614-9bceca103c13] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0214 20:45:25.766337  251426 system_pods.go:89] "coredns-668d6bf9bc-krs7x" [ee300442-dde7-4afc-af75-8726fe08706e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 20:45:25.766350  251426 system_pods.go:89] "coredns-668d6bf9bc-tdch9" [b4c6cc13-5d14-41f3-84d0-c6d356e99f51] Running
	I0214 20:45:25.766360  251426 system_pods.go:89] "etcd-addons-371781" [2ba8c5aa-4fdf-485b-8e15-bd71e3acf96b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 20:45:25.766369  251426 system_pods.go:89] "kube-apiserver-addons-371781" [002911dc-7624-4c52-abb0-182be6132a95] Running
	I0214 20:45:25.766377  251426 system_pods.go:89] "kube-controller-manager-addons-371781" [76c1f427-1a79-4f17-9a24-c59a0b2ee0c7] Running
	I0214 20:45:25.766389  251426 system_pods.go:89] "kube-ingress-dns-minikube" [c56b6b2e-64f2-4ced-996a-7fd6b0aa3527] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 20:45:25.766398  251426 system_pods.go:89] "kube-proxy-5l22r" [74b98ef1-5fd7-4241-8ddd-a1bd50c13c63] Running
	I0214 20:45:25.766411  251426 system_pods.go:89] "kube-scheduler-addons-371781" [bba1a743-16aa-4743-9a80-7291067d5bf0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 20:45:25.766420  251426 system_pods.go:89] "metrics-server-7fbb699795-rts29" [ebeaa3ab-84cc-437b-bd18-6c140dad9938] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0214 20:45:25.766433  251426 system_pods.go:89] "nvidia-device-plugin-daemonset-dnfkm" [9d95b55b-46ad-487c-9125-9f8b59218d7b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0214 20:45:25.766447  251426 system_pods.go:89] "registry-6c88467877-6d8q9" [112052e6-40f0-43f6-8eab-72c10cd3b9aa] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0214 20:45:25.766460  251426 system_pods.go:89] "registry-proxy-q4kfl" [38d75acd-d5f9-40f8-a54f-f648b3982f76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0214 20:45:25.766470  251426 system_pods.go:89] "snapshot-controller-68b874b76f-5828g" [4048a80d-0b2c-4ee5-ae9b-3d2f1d29207d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 20:45:25.766482  251426 system_pods.go:89] "snapshot-controller-68b874b76f-r7bxz" [b0d7d98a-1132-42ae-9f8e-944483d54a01] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0214 20:45:25.766491  251426 system_pods.go:89] "storage-provisioner" [a316e15d-761c-4a1f-8af0-f3c68c0e680f] Running
	I0214 20:45:25.766505  251426 system_pods.go:126] duration metric: took 6.374216ms to wait for k8s-apps to be running ...
	I0214 20:45:25.766518  251426 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 20:45:25.766572  251426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 20:45:25.859161  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 20:45:26.152148  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:26.152506  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:26.649596  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:26.649813  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:27.150816  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:27.151049  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:27.605111  251426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.224532135s)
	I0214 20:45:27.605164  251426 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.838567039s)
	I0214 20:45:27.605197  251426 system_svc.go:56] duration metric: took 1.83867397s WaitForService to wait for kubelet
	I0214 20:45:27.605215  251426 kubeadm.go:578] duration metric: took 9.935161713s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 20:45:27.605268  251426 node_conditions.go:102] verifying NodePressure condition ...
	I0214 20:45:27.605532  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.583530723s)
	I0214 20:45:27.605578  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:27.605596  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:27.605943  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:27.605964  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:27.605974  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:27.605981  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:27.606298  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:27.606332  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:27.606356  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:27.606378  251426 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-371781"
	I0214 20:45:27.606424  251426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0214 20:45:27.607582  251426 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 20:45:27.608865  251426 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0214 20:45:27.609573  251426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 20:45:27.609645  251426 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 20:45:27.609664  251426 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 20:45:27.628289  251426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 20:45:27.628312  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:27.635406  251426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 20:45:27.635433  251426 node_conditions.go:123] node cpu capacity is 2
	I0214 20:45:27.635447  251426 node_conditions.go:105] duration metric: took 30.174268ms to run NodePressure ...
	I0214 20:45:27.635463  251426 start.go:241] waiting for startup goroutines ...
	I0214 20:45:27.655315  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:27.655386  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:27.667745  251426 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 20:45:27.667774  251426 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 20:45:27.703147  251426 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 20:45:27.703173  251426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0214 20:45:27.720256  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.861044637s)
	I0214 20:45:27.720325  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:27.720354  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:27.720620  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:27.720637  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:27.720649  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:27.720657  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:27.720661  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:27.720861  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:27.720883  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:27.766485  251426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 20:45:28.113583  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:28.147883  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:28.149505  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:28.613630  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:28.714947  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:28.715101  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:28.951054  251426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.184530132s)
	I0214 20:45:28.951130  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:28.951149  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:28.951510  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:28.951531  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:28.951543  251426 main.go:141] libmachine: Making call to close driver server
	I0214 20:45:28.951553  251426 main.go:141] libmachine: (addons-371781) Calling .Close
	I0214 20:45:28.951826  251426 main.go:141] libmachine: Successfully made call to close driver server
	I0214 20:45:28.951844  251426 main.go:141] libmachine: (addons-371781) DBG | Closing plugin on server side
	I0214 20:45:28.951853  251426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 20:45:28.952780  251426 addons.go:479] Verifying addon gcp-auth=true in "addons-371781"
	I0214 20:45:28.954943  251426 out.go:177] * Verifying gcp-auth addon...
	I0214 20:45:28.956974  251426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 20:45:28.986498  251426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 20:45:28.986524  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:29.114158  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:29.162587  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:29.166357  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:29.460299  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:29.613728  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:29.648188  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:29.648587  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:29.960405  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:30.113495  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:30.214689  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:30.214935  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:30.461766  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:30.613193  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:30.651336  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:30.651682  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:30.959993  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:31.113380  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:31.149251  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:31.149660  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:31.460527  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:31.612578  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:31.713474  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:31.713807  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:31.960908  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:32.113540  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:32.150261  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:32.150386  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:32.460108  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:32.612815  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:32.653302  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:32.653762  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:32.960597  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:33.113972  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:33.150179  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:33.151046  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:33.461799  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:33.612732  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:33.648680  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:33.649819  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:33.960112  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:34.114826  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:34.148612  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:34.150213  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:34.461429  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:34.614264  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:34.649229  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:34.649395  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:34.960761  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:35.113158  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:35.148802  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:35.149860  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:35.460019  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:35.614074  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:35.648605  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:35.650028  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:35.960879  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:36.113375  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:36.148737  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:36.148835  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:36.459609  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:36.612693  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:36.648506  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:36.650225  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:36.960638  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:37.113099  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:37.148438  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:37.150638  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:37.460596  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:37.613230  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:37.649237  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:37.649265  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:37.959900  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:38.112949  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:38.148504  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:38.148984  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:38.460284  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:38.613491  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:38.649171  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:38.649422  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:38.960805  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:39.112679  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:39.150003  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:39.150116  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:39.460122  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:39.613786  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:39.648776  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:39.649816  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:39.960219  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:40.113155  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:40.148782  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:40.149691  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:40.460448  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:40.616119  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:40.648108  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:40.649655  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:40.960698  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:41.112890  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:41.148538  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:41.150101  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:41.459874  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:41.613399  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:41.648884  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:41.650212  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:41.959773  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:42.112887  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:42.149620  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:42.150759  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:42.461201  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:42.612514  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:42.649631  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:42.650266  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:43.032368  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:43.114281  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:43.151396  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:43.152753  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:43.460747  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:43.612814  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:43.648278  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:43.649251  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:43.960012  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:44.113040  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:44.148611  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:44.149778  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:44.459712  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:44.612390  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:44.649313  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:44.649494  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:44.960218  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:45.155067  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:45.155219  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:45.155317  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:45.460466  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:45.613717  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:45.648727  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:45.649566  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:45.960547  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:46.113998  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:46.148360  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:46.150040  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:46.460602  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:46.612277  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:46.651217  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:46.654573  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:46.960276  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:47.113496  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:47.149059  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:47.149233  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:47.461066  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:47.612655  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:47.648053  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:47.649197  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:47.959737  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:48.112283  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:48.149263  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:48.149458  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:48.459920  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:48.613282  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:48.648809  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:48.649150  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:48.960505  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:49.114052  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:49.149956  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:49.151289  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:49.460514  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:49.613540  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:49.648726  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:49.649394  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:49.960876  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:50.115666  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:50.147987  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:50.148913  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:50.460029  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:50.613351  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:50.649224  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:50.649391  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:50.959957  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:51.112904  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:51.148099  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:51.149695  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:51.460911  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:51.613618  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:51.648984  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:51.649661  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:51.960821  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:52.114659  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:52.148789  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:52.149325  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:52.460150  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:52.612933  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:52.648629  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:52.649931  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:52.959640  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:53.113567  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:53.148119  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:53.149683  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:53.460512  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:53.613858  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:53.648366  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:53.650117  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:53.960146  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:54.114168  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:54.149895  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:54.150221  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:54.460392  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:54.613912  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:54.648599  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:54.649404  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:54.961795  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:55.113596  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:55.150382  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:55.151444  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:55.461119  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:55.613267  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:55.650077  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:55.650465  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:55.960096  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:56.113568  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:56.149169  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:56.150248  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:56.460564  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:56.613537  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:56.647810  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:56.649365  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:56.960000  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:57.115666  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:57.148858  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:57.149581  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:57.460381  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:57.612784  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:57.648891  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:57.653485  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:57.960407  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:58.115659  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:58.149039  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:58.149817  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:58.464967  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:58.614558  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:58.650469  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:58.652284  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:58.960055  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:59.112950  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:59.148658  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:59.149301  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:59.460692  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:45:59.614473  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:45:59.649447  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:45:59.650094  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:45:59.960546  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:00.112756  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:00.150456  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:00.151159  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:00.460385  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:00.613964  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:00.648879  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:00.650180  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:00.960020  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:01.112748  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:01.148303  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:01.148964  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:01.460296  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:01.613953  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:01.648269  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:01.649398  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:01.960239  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:02.113890  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:02.147968  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:02.149303  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:02.460728  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:02.612998  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:02.648280  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:02.650224  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:02.960614  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:03.113139  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:03.148974  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:03.150596  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:03.460774  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:03.612914  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:03.648840  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:03.649676  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:03.960472  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:04.114246  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:04.151775  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:04.151886  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:04.467953  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:04.614817  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:04.648059  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:04.649144  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:04.960050  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:05.119404  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:05.150330  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:05.155057  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:05.459642  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:05.611904  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:05.649914  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:05.650167  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:05.959293  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:06.112667  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:06.147446  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:06.148852  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:06.459975  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:06.613615  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:06.648943  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:06.649519  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:06.960435  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:07.112933  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:07.148952  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:07.150011  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:07.460628  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:07.613684  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:07.648468  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:07.649596  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:07.960539  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:08.112552  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:08.148969  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 20:46:08.149793  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:08.460820  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:08.613624  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:08.656383  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:08.657081  251426 kapi.go:107] duration metric: took 43.010788402s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 20:46:08.960395  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:09.113865  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:09.147985  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:09.459775  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:09.612967  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:09.648261  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:09.960446  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:10.113416  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:10.148484  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:10.460551  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:10.612789  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:10.648412  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:10.959783  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:11.112923  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:11.147991  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:11.459608  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:11.612223  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:11.648219  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:11.960045  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:12.113995  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:12.148371  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:12.460348  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:12.616342  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:12.648735  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:12.962762  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:13.112841  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:13.148211  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:13.459549  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:13.612945  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:13.648832  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:13.959453  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:14.114004  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:14.149811  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:14.460564  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:14.612778  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:14.713795  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:14.961585  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:15.112944  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:15.148114  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:15.460185  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:15.613308  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:15.649042  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:15.959609  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:16.112484  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:16.148946  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:16.461328  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:16.615418  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:16.648452  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:16.960330  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:17.113272  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:17.148212  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:17.459826  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:17.615500  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:17.649283  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:17.959788  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:18.117216  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:18.150451  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:18.460241  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:18.613145  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:18.648794  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:18.961041  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:19.115504  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:19.150093  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:19.459977  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:19.619933  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:19.719435  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:19.960021  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:20.112515  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:20.148824  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:20.460486  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:20.613784  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:20.648746  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:20.959764  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:21.114389  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:21.214872  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:21.464472  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:21.613400  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:21.648452  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:21.959973  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:22.113306  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:22.149067  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:22.459565  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:22.616130  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:22.648305  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:22.960320  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:23.113690  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:23.149235  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:23.470486  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:23.614053  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:23.648616  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:23.960543  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:24.113818  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:24.148115  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:24.476962  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:24.614288  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:24.651072  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:24.960412  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:25.114727  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:25.148251  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:25.459918  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:25.613064  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:25.648172  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:25.960627  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:26.113448  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:26.149114  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:26.459791  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:26.613187  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:26.649057  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:26.959661  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:27.112829  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:27.148079  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:27.459846  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:27.613703  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:27.647979  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:27.959310  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:28.113333  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:28.148830  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:28.459807  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:28.612742  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:28.647921  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:28.959627  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:29.112638  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:29.147465  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:29.460354  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:29.615558  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:29.648829  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:29.966493  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:30.114025  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 20:46:30.148613  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:30.460800  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:30.613127  251426 kapi.go:107] duration metric: took 1m3.003546897s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 20:46:30.649095  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:30.961480  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:31.150382  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:31.463185  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:31.649932  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:31.962073  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:32.149401  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:32.460890  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:32.649894  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:32.966739  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:33.148786  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:33.460538  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:33.648417  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:33.960097  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:34.149531  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:34.460943  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:34.649912  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:34.960503  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:35.148989  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:35.462648  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:35.648868  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:35.959911  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:36.149751  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:36.460874  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:36.659490  251426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 20:46:36.961677  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:37.148313  251426 kapi.go:107] duration metric: took 1m11.502929335s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 20:46:37.459723  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:37.960903  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:38.466806  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:38.959726  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:39.459770  251426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 20:46:39.959926  251426 kapi.go:107] duration metric: took 1m11.002941766s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 20:46:39.961278  251426 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-371781 cluster.
	I0214 20:46:39.962446  251426 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 20:46:39.963515  251426 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
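	(For reference, the `gcp-auth-skip-secret` key mentioned in the log line above is a pod label. A minimal sketch of a manifest carrying it is shown here; the pod name, image, and label value are illustrative only, and the addon keys off the presence of the label rather than this specific value:)

	# Hypothetical pod manifest; only metadata.labels is the relevant part.
	# Pods carrying the gcp-auth-skip-secret label are skipped by the
	# gcp-auth addon, so no GCP credentials are mounted into them.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"  # value shown is illustrative
	spec:
	  containers:
	  - name: app
	    image: nginx                # illustrative image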
	I0214 20:46:39.964619  251426 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0214 20:46:39.965562  251426 addons.go:514] duration metric: took 1m22.295491513s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0214 20:46:39.965601  251426 start.go:246] waiting for cluster config update ...
	I0214 20:46:39.965622  251426 start.go:255] writing updated cluster config ...
	I0214 20:46:39.965907  251426 ssh_runner.go:195] Run: rm -f paused
	I0214 20:46:39.972583  251426 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 20:46:39.975700  251426 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-krs7x" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.979456  251426 pod_ready.go:94] pod "coredns-668d6bf9bc-krs7x" is "Ready"
	I0214 20:46:39.979473  251426 pod_ready.go:86] duration metric: took 3.749903ms for pod "coredns-668d6bf9bc-krs7x" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.981148  251426 pod_ready.go:83] waiting for pod "etcd-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.984796  251426 pod_ready.go:94] pod "etcd-addons-371781" is "Ready"
	I0214 20:46:39.984811  251426 pod_ready.go:86] duration metric: took 3.647023ms for pod "etcd-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.986482  251426 pod_ready.go:83] waiting for pod "kube-apiserver-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.993526  251426 pod_ready.go:94] pod "kube-apiserver-addons-371781" is "Ready"
	I0214 20:46:39.993545  251426 pod_ready.go:86] duration metric: took 7.043875ms for pod "kube-apiserver-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:39.995206  251426 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:40.376084  251426 pod_ready.go:94] pod "kube-controller-manager-addons-371781" is "Ready"
	I0214 20:46:40.376116  251426 pod_ready.go:86] duration metric: took 380.892471ms for pod "kube-controller-manager-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:40.575876  251426 pod_ready.go:83] waiting for pod "kube-proxy-5l22r" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:40.975857  251426 pod_ready.go:94] pod "kube-proxy-5l22r" is "Ready"
	I0214 20:46:40.975885  251426 pod_ready.go:86] duration metric: took 399.983875ms for pod "kube-proxy-5l22r" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:41.176205  251426 pod_ready.go:83] waiting for pod "kube-scheduler-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:41.576227  251426 pod_ready.go:94] pod "kube-scheduler-addons-371781" is "Ready"
	I0214 20:46:41.576252  251426 pod_ready.go:86] duration metric: took 400.020089ms for pod "kube-scheduler-addons-371781" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 20:46:41.576261  251426 pod_ready.go:40] duration metric: took 1.603642623s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 20:46:41.622073  251426 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 20:46:41.623506  251426 out.go:177] * Done! kubectl is now configured to use "addons-371781" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.827449761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=365e47f1-41c9-4702-b64f-f2660bc669cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.827495999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=365e47f1-41c9-4702-b64f-f2660bc669cc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.827758273Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f376776943ae5779a073cbcd09fce5fd4e48a411a92c89f7b964fca1687caa72,PodSandboxId:25a5dfde136cdcae9c79721f427f12a78447b09e341d5631b0014cea1b1bf004,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739566047806882660,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a8f5dee-4e76-40b6-8543-4ddcdad0735f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81aeafd1fdb26f88f02e6d91397d1c9534ed7aac7ca5097c8ba55159a640980f,PodSandboxId:8134cc8e6c5fd7e964df6a872178c10d2ecae56dc4894e42efa5f57cd714aa0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739566004704981326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d279f116-7ecb-4389-b50c-dc4e1e6388ca,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800cfa5e7081231b6a72ff116d65c746e4bb4c1e54ed7febad7d5477685b4963,PodSandboxId:be8a1c7910c6ae77c13411ea4a0bd0ae8693286a010c9f6cf797533aa668a7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739565996464318245,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-6b86h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0582072-1658-4665-8212-99897ac565fe,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bce16a58627926954e93621dbcc87b1bb36a9ad93505d0b172f75bff26833916,PodSandboxId:a08ba68b574c299bbadda440edeae815c92e94cb91becf30aaed05fe7589a6c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565979639321616,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fcx7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28772954-5927-42d9-b143-0334e8be9273,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745d7d97e38bd8a4172fd725c032e3b17b7a43c53b45e12af32c747ddd436830,PodSandboxId:21912f27fe680b82e3fcd12ec8fe2daffad39f42116ef73bcdbaa8c7631797df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565974588894745,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fdkfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ccc9d28-2b08-406b-bc68-d4c20ea764c4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ebdca2615299bd98c6e78827d0be8a9f7046dd544fba7321f8d0482be94b67,PodSandboxId:3793af901a894317d81c292063ddaf78aaed1c2adeb8a03771facea80a9a85a7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739565953091669091,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xpmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0d9c2-1c02-46b4-a614-9bceca103c13,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb1402b98fe88f1a6cead6e1e4c488d54c08dfb4eb8c29954873406ef1f6506,PodSandboxId:d64f64c84820d0ab7159a489a0049fe0503f65407dc026ef2c3c1d416160d570,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739565934007035953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b6b2e-64f2-4ced-996a-7fd6b0aa3527,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7dc5dc33049319651a21dce9dcd840ed5a4d0e3c77c661e9e1c754c60ecebe4,PodSandboxId:24139eac0ac3e60f3eea81db28bfaa46767c9e1ed656179a09d053993303c28c,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739565924106172645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a316e15d-761c-4a1f-8af0-f3c68c0e680f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:313b90425eca60dea2c64fd76adad1ffafe34a654bd8350d53543324fa465879,PodSandboxId:50980b153b3ad6c8a10045e8772ae2ddcdf541ba45befb2a47fd73594af6456e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739565921190622220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-krs7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee300442-dde7-4afc-af75-8726fe08706e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e6e88c69a4923b7ba3b1757db24803ac293ea54593cd58ce109e2edd2f55cce6,PodSandboxId:2f4b4fd83f84893b229cec5bf2a19460d8e5787b5bf6a51a117fee142da0ecd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739565918849989779,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l22r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b98ef1-5fd7-4241-8ddd-a1bd50c13c63,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90cda719328509e5f6296a4bbb4
e05dc570e1b323f783d95d035de43ea3ad64,PodSandboxId:888ef62e8fbcf1a00e11023d010605e97fbc1dc635d0271e173845066edaa6d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739565907532761541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727d8583ab29bc9122bd598bf052fcab,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c0c8ab6266303028344ae2cb4b4561c4025ab295027c8e1be1d419cdef301,PodSandbox
Id:64ee04002e52390a5c7e2bce409e78c7d17cc9a9a0856f29dc601f14ebfd17eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739565907487058460,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44afe658505b0d8198ec01850810f536,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddcfa75cb1ea302af043b4236370acb1b0adb0a8e11d80018e4bb79dbc1e25ce,PodSandboxId:a90176d663c8d4
4b0f5dca6b8a592870e29e28739a3ac8a04834df797bf43531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739565907481628356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56d311d3211aeecbd2689c6a1a7cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2b22157ff68dee6d062c660d2159ab320eb2d194664da55c453f68bb35ab18,PodSandboxId:d6d90
0ea2318014bbad8d8326bccecf6ed73ee285dfd6d3383f7bc4781c4e39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739565907417525293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d57bb9b871f9f17fd1371878a516e5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=365e47f1-41c9-4702-b64f-f2660bc669cc name=/runtime.v1.RuntimeServ
ice/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.837506945Z" level=debug msg="Applying tar in /var/lib/containers/storage/overlay/385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da/diff" file="overlay/overlay.go:2160"
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.893512121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a37dc62-87be-4fee-ba22-626d5fccdf9f name=/runtime.v1.RuntimeService/Version
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.893569353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a37dc62-87be-4fee-ba22-626d5fccdf9f name=/runtime.v1.RuntimeService/Version
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.895149615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e873e89b-b0e4-40a3-bb87-a1cef38ccf4d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.897423526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566185897399695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e873e89b-b0e4-40a3-bb87-a1cef38ccf4d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.898320942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64933d23-0c05-4e1d-afea-68f416294761 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.898376260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64933d23-0c05-4e1d-afea-68f416294761 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.898633170Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f376776943ae5779a073cbcd09fce5fd4e48a411a92c89f7b964fca1687caa72,PodSandboxId:25a5dfde136cdcae9c79721f427f12a78447b09e341d5631b0014cea1b1bf004,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739566047806882660,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a8f5dee-4e76-40b6-8543-4ddcdad0735f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81aeafd1fdb26f88f02e6d91397d1c9534ed7aac7ca5097c8ba55159a640980f,PodSandboxId:8134cc8e6c5fd7e964df6a872178c10d2ecae56dc4894e42efa5f57cd714aa0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739566004704981326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d279f116-7ecb-4389-b50c-dc4e1e6388ca,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800cfa5e7081231b6a72ff116d65c746e4bb4c1e54ed7febad7d5477685b4963,PodSandboxId:be8a1c7910c6ae77c13411ea4a0bd0ae8693286a010c9f6cf797533aa668a7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739565996464318245,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-6b86h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0582072-1658-4665-8212-99897ac565fe,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bce16a58627926954e93621dbcc87b1bb36a9ad93505d0b172f75bff26833916,PodSandboxId:a08ba68b574c299bbadda440edeae815c92e94cb91becf30aaed05fe7589a6c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565979639321616,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fcx7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28772954-5927-42d9-b143-0334e8be9273,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745d7d97e38bd8a4172fd725c032e3b17b7a43c53b45e12af32c747ddd436830,PodSandboxId:21912f27fe680b82e3fcd12ec8fe2daffad39f42116ef73bcdbaa8c7631797df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565974588894745,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fdkfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ccc9d28-2b08-406b-bc68-d4c20ea764c4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ebdca2615299bd98c6e78827d0be8a9f7046dd544fba7321f8d0482be94b67,PodSandboxId:3793af901a894317d81c292063ddaf78aaed1c2adeb8a03771facea80a9a85a7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739565953091669091,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xpmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0d9c2-1c02-46b4-a614-9bceca103c13,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb1402b98fe88f1a6cead6e1e4c488d54c08dfb4eb8c29954873406ef1f6506,PodSandboxId:d64f64c84820d0ab7159a489a0049fe0503f65407dc026ef2c3c1d416160d570,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739565934007035953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b6b2e-64f2-4ced-996a-7fd6b0aa3527,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7dc5dc33049319651a21dce9dcd840ed5a4d0e3c77c661e9e1c754c60ecebe4,PodSandboxId:24139eac0ac3e60f3eea81db28bfaa46767c9e1ed656179a09d053993303c28c,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739565924106172645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a316e15d-761c-4a1f-8af0-f3c68c0e680f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:313b90425eca60dea2c64fd76adad1ffafe34a654bd8350d53543324fa465879,PodSandboxId:50980b153b3ad6c8a10045e8772ae2ddcdf541ba45befb2a47fd73594af6456e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739565921190622220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-krs7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee300442-dde7-4afc-af75-8726fe08706e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e6e88c69a4923b7ba3b1757db24803ac293ea54593cd58ce109e2edd2f55cce6,PodSandboxId:2f4b4fd83f84893b229cec5bf2a19460d8e5787b5bf6a51a117fee142da0ecd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739565918849989779,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l22r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b98ef1-5fd7-4241-8ddd-a1bd50c13c63,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90cda719328509e5f6296a4bbb4
e05dc570e1b323f783d95d035de43ea3ad64,PodSandboxId:888ef62e8fbcf1a00e11023d010605e97fbc1dc635d0271e173845066edaa6d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739565907532761541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727d8583ab29bc9122bd598bf052fcab,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c0c8ab6266303028344ae2cb4b4561c4025ab295027c8e1be1d419cdef301,PodSandbox
Id:64ee04002e52390a5c7e2bce409e78c7d17cc9a9a0856f29dc601f14ebfd17eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739565907487058460,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44afe658505b0d8198ec01850810f536,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddcfa75cb1ea302af043b4236370acb1b0adb0a8e11d80018e4bb79dbc1e25ce,PodSandboxId:a90176d663c8d4
4b0f5dca6b8a592870e29e28739a3ac8a04834df797bf43531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739565907481628356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56d311d3211aeecbd2689c6a1a7cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2b22157ff68dee6d062c660d2159ab320eb2d194664da55c453f68bb35ab18,PodSandboxId:d6d90
0ea2318014bbad8d8326bccecf6ed73ee285dfd6d3383f7bc4781c4e39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739565907417525293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d57bb9b871f9f17fd1371878a516e5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64933d23-0c05-4e1d-afea-68f416294761 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.963273067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dca2373-924a-49db-8ce7-fd6d3ef8ce00 name=/runtime.v1.RuntimeService/Version
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.963458221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dca2373-924a-49db-8ce7-fd6d3ef8ce00 name=/runtime.v1.RuntimeService/Version
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.965866481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d747d6d-4983-471b-8b00-567f661f2d82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.967077466Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566185967057074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d747d6d-4983-471b-8b00-567f661f2d82 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.967723174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b8738dd-a3d3-4efc-8c10-d0228f91ec19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.967818589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b8738dd-a3d3-4efc-8c10-d0228f91ec19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.968348720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f376776943ae5779a073cbcd09fce5fd4e48a411a92c89f7b964fca1687caa72,PodSandboxId:25a5dfde136cdcae9c79721f427f12a78447b09e341d5631b0014cea1b1bf004,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739566047806882660,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a8f5dee-4e76-40b6-8543-4ddcdad0735f,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81aeafd1fdb26f88f02e6d91397d1c9534ed7aac7ca5097c8ba55159a640980f,PodSandboxId:8134cc8e6c5fd7e964df6a872178c10d2ecae56dc4894e42efa5f57cd714aa0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739566004704981326,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d279f116-7ecb-4389-b50c-dc4e1e6388ca,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:800cfa5e7081231b6a72ff116d65c746e4bb4c1e54ed7febad7d5477685b4963,PodSandboxId:be8a1c7910c6ae77c13411ea4a0bd0ae8693286a010c9f6cf797533aa668a7fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739565996464318245,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-6b86h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a0582072-1658-4665-8212-99897ac565fe,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:bce16a58627926954e93621dbcc87b1bb36a9ad93505d0b172f75bff26833916,PodSandboxId:a08ba68b574c299bbadda440edeae815c92e94cb91becf30aaed05fe7589a6c5,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565979639321616,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-fcx7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 28772954-5927-42d9-b143-0334e8be9273,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:745d7d97e38bd8a4172fd725c032e3b17b7a43c53b45e12af32c747ddd436830,PodSandboxId:21912f27fe680b82e3fcd12ec8fe2daffad39f42116ef73bcdbaa8c7631797df,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739565974588894745,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fdkfh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6ccc9d28-2b08-406b-bc68-d4c20ea764c4,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ebdca2615299bd98c6e78827d0be8a9f7046dd544fba7321f8d0482be94b67,PodSandboxId:3793af901a894317d81c292063ddaf78aaed1c2adeb8a03771facea80a9a85a7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739565953091669091,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-xpmjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3a0d9c2-1c02-46b4-a614-9bceca103c13,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfb1402b98fe88f1a6cead6e1e4c488d54c08dfb4eb8c29954873406ef1f6506,PodSandboxId:d64f64c84820d0ab7159a489a0049fe0503f65407dc026ef2c3c1d416160d570,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739565934007035953,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c56b6b2e-64f2-4ced-996a-7fd6b0aa3527,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7dc5dc33049319651a21dce9dcd840ed5a4d0e3c77c661e9e1c754c60ecebe4,PodSandboxId:24139eac0ac3e60f3eea81db28bfaa46767c9e1ed656179a09d053993303c28c,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739565924106172645,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a316e15d-761c-4a1f-8af0-f3c68c0e680f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:313b90425eca60dea2c64fd76adad1ffafe34a654bd8350d53543324fa465879,PodSandboxId:50980b153b3ad6c8a10045e8772ae2ddcdf541ba45befb2a47fd73594af6456e,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739565921190622220,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-krs7x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee300442-dde7-4afc-af75-8726fe08706e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:e6e88c69a4923b7ba3b1757db24803ac293ea54593cd58ce109e2edd2f55cce6,PodSandboxId:2f4b4fd83f84893b229cec5bf2a19460d8e5787b5bf6a51a117fee142da0ecd3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739565918849989779,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5l22r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b98ef1-5fd7-4241-8ddd-a1bd50c13c63,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90cda719328509e5f6296a4bbb4
e05dc570e1b323f783d95d035de43ea3ad64,PodSandboxId:888ef62e8fbcf1a00e11023d010605e97fbc1dc635d0271e173845066edaa6d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739565907532761541,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727d8583ab29bc9122bd598bf052fcab,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:741c0c8ab6266303028344ae2cb4b4561c4025ab295027c8e1be1d419cdef301,PodSandbox
Id:64ee04002e52390a5c7e2bce409e78c7d17cc9a9a0856f29dc601f14ebfd17eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739565907487058460,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44afe658505b0d8198ec01850810f536,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddcfa75cb1ea302af043b4236370acb1b0adb0a8e11d80018e4bb79dbc1e25ce,PodSandboxId:a90176d663c8d4
4b0f5dca6b8a592870e29e28739a3ac8a04834df797bf43531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739565907481628356,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e56d311d3211aeecbd2689c6a1a7cd2b,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f2b22157ff68dee6d062c660d2159ab320eb2d194664da55c453f68bb35ab18,PodSandboxId:d6d90
0ea2318014bbad8d8326bccecf6ed73ee285dfd6d3383f7bc4781c4e39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739565907417525293,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-371781,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d57bb9b871f9f17fd1371878a516e5,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b8738dd-a3d3-4efc-8c10-d0228f91ec19 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.978048737Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e): 2135952 bytes (100.00%)" file="server/image_pull.go:276" id=b689f362-20c2-4736-a226-cd136addc8bd name=/runtime.v1.ImageService/PullImage
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.978318064Z" level=debug msg="No compression detected" file="compression/compression.go:133"
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.978507699Z" level=debug msg="Compression change for blob sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30 (\"application/vnd.docker.container.image.v1+json\") not supported" file="copy/compression.go:91"
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.978548688Z" level=debug msg="Using original blob without modification" file="copy/compression.go:226"
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.978674601Z" level=debug msg="ImagePull (0): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 0 bytes (0.00%)" file="server/image_pull.go:276" id=b689f362-20c2-4736-a226-cd136addc8bd name=/runtime.v1.ImageService/PullImage
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.995301938Z" level=debug msg="ImagePull (2): docker.io/kicbase/echo-server:1.0 (sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30): 1197 bytes (100.00%)" file="server/image_pull.go:276" id=b689f362-20c2-4736-a226-cd136addc8bd name=/runtime.v1.ImageService/PullImage
	Feb 14 20:49:45 addons-371781 crio[663]: time="2025-02-14 20:49:45.995476370Z" level=debug msg="setting image creation date to 2022-07-10 23:15:54.185884751 +0000 UTC" file="storage/storage_dest.go:775"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f376776943ae5       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   25a5dfde136cd       nginx
	81aeafd1fdb26       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   8134cc8e6c5fd       busybox
	800cfa5e70812       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   be8a1c7910c6a       ingress-nginx-controller-56d7c84fd4-6b86h
	bce16a5862792       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   a08ba68b574c2       ingress-nginx-admission-patch-fcx7h
	745d7d97e38bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   21912f27fe680       ingress-nginx-admission-create-fdkfh
	d4ebdca261529       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   3793af901a894       amd-gpu-device-plugin-xpmjz
	cfb1402b98fe8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   d64f64c84820d       kube-ingress-dns-minikube
	e7dc5dc330493       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   24139eac0ac3e       storage-provisioner
	313b90425eca6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   50980b153b3ad       coredns-668d6bf9bc-krs7x
	e6e88c69a4923       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   2f4b4fd83f848       kube-proxy-5l22r
	e90cda7193285       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   888ef62e8fbcf       etcd-addons-371781
	741c0c8ab6266       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   64ee04002e523       kube-apiserver-addons-371781
	ddcfa75cb1ea3       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   a90176d663c8d       kube-controller-manager-addons-371781
	2f2b22157ff68       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   d6d900ea23180       kube-scheduler-addons-371781
	
	
	==> coredns [313b90425eca60dea2c64fd76adad1ffafe34a654bd8350d53543324fa465879] <==
	[INFO] 10.244.0.8:49862 - 24218 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000150698s
	[INFO] 10.244.0.8:49862 - 45712 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080177s
	[INFO] 10.244.0.8:49862 - 18902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000068439s
	[INFO] 10.244.0.8:49862 - 32224 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008639s
	[INFO] 10.244.0.8:49862 - 30666 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000054685s
	[INFO] 10.244.0.8:49862 - 32297 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000099705s
	[INFO] 10.244.0.8:49862 - 63132 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000065667s
	[INFO] 10.244.0.8:34653 - 61330 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010839s
	[INFO] 10.244.0.8:34653 - 61037 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084957s
	[INFO] 10.244.0.8:42342 - 24020 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082676s
	[INFO] 10.244.0.8:42342 - 23778 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072163s
	[INFO] 10.244.0.8:50143 - 54546 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076571s
	[INFO] 10.244.0.8:50143 - 54367 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000060258s
	[INFO] 10.244.0.8:49722 - 37097 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088072s
	[INFO] 10.244.0.8:49722 - 37279 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067133s
	[INFO] 10.244.0.23:54639 - 49898 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000372322s
	[INFO] 10.244.0.23:43149 - 10791 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000121953s
	[INFO] 10.244.0.23:51368 - 41234 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017491s
	[INFO] 10.244.0.23:44681 - 45903 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106766s
	[INFO] 10.244.0.23:56937 - 17352 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000078184s
	[INFO] 10.244.0.23:50266 - 38196 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057796s
	[INFO] 10.244.0.23:46079 - 3203 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003734785s
	[INFO] 10.244.0.23:35292 - 7712 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004501999s
	[INFO] 10.244.0.26:34376 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000396652s
	[INFO] 10.244.0.26:41455 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00026784s
	
	
	==> describe nodes <==
	Name:               addons-371781
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-371781
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=addons-371781
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T20_45_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-371781
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 20:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-371781
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 20:49:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 20:47:46 +0000   Fri, 14 Feb 2025 20:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 20:47:46 +0000   Fri, 14 Feb 2025 20:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 20:47:46 +0000   Fri, 14 Feb 2025 20:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 20:47:46 +0000   Fri, 14 Feb 2025 20:45:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    addons-371781
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6aa746a235f749959a99b783f172bc6c
	  System UUID:                6aa746a2-35f7-4995-9a99-b783f172bc6c
	  Boot ID:                    13d8fa72-0090-490d-93f5-95f0ec64c1a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-7d9564db4-jmdld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-6b86h    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
	  kube-system                 amd-gpu-device-plugin-xpmjz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 coredns-668d6bf9bc-krs7x                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-addons-371781                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m34s
	  kube-system                 kube-apiserver-addons-371781                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-addons-371781        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-5l22r                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-371781                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m26s  kube-proxy       
	  Normal  Starting                 4m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s  kubelet          Node addons-371781 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s  kubelet          Node addons-371781 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s  kubelet          Node addons-371781 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m32s  kubelet          Node addons-371781 status is now: NodeReady
	  Normal  RegisteredNode           4m30s  node-controller  Node addons-371781 event: Registered Node addons-371781 in Controller
	
	
	==> dmesg <==
	[  +0.078070] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.047608] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.248422] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +4.754431] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.257047] kauditd_printk_skb: 126 callbacks suppressed
	[  +6.202689] kauditd_printk_skb: 98 callbacks suppressed
	[ +19.110453] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.368342] kauditd_printk_skb: 9 callbacks suppressed
	[Feb14 20:46] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.536620] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.134317] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.139279] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.140088] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.371634] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.186826] kauditd_printk_skb: 7 callbacks suppressed
	[Feb14 20:47] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.038374] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.521130] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.267596] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.123231] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.063618] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.152887] kauditd_printk_skb: 35 callbacks suppressed
	[  +9.793840] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.866268] kauditd_printk_skb: 7 callbacks suppressed
	[Feb14 20:48] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [e90cda719328509e5f6296a4bbb4e05dc570e1b323f783d95d035de43ea3ad64] <==
	{"level":"info","ts":"2025-02-14T20:45:43.019874Z","caller":"traceutil/trace.go:171","msg":"trace[945444532] linearizableReadLoop","detail":"{readStateIndex:882; appliedIndex:881; }","duration":"108.607704ms","start":"2025-02-14T20:45:42.911253Z","end":"2025-02-14T20:45:43.019861Z","steps":["trace[945444532] 'read index received'  (duration: 108.461944ms)","trace[945444532] 'applied index is now lower than readState.Index'  (duration: 145.331µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T20:45:43.019954Z","caller":"traceutil/trace.go:171","msg":"trace[140547252] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"330.885629ms","start":"2025-02-14T20:45:42.689063Z","end":"2025-02-14T20:45:43.019948Z","steps":["trace[140547252] 'process raft request'  (duration: 330.689188ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:45:43.020018Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:45:42.689042Z","time spent":"330.927902ms","remote":"127.0.0.1:39370","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:858 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-02-14T20:45:43.020146Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.902536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T20:45:43.020177Z","caller":"traceutil/trace.go:171","msg":"trace[1426764979] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:862; }","duration":"108.995574ms","start":"2025-02-14T20:45:42.911174Z","end":"2025-02-14T20:45:43.020170Z","steps":["trace[1426764979] 'agreement among raft nodes before linearized reading'  (duration: 108.951913ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:45:45.141317Z","caller":"traceutil/trace.go:171","msg":"trace[876878620] transaction","detail":"{read_only:false; response_revision:864; number_of_response:1; }","duration":"113.672748ms","start":"2025-02-14T20:45:45.027596Z","end":"2025-02-14T20:45:45.141269Z","steps":["trace[876878620] 'process raft request'  (duration: 113.415546ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:45:47.256941Z","caller":"traceutil/trace.go:171","msg":"trace[2097419831] transaction","detail":"{read_only:false; response_revision:866; number_of_response:1; }","duration":"105.942046ms","start":"2025-02-14T20:45:47.150986Z","end":"2025-02-14T20:45:47.256928Z","steps":["trace[2097419831] 'process raft request'  (duration: 105.756956ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:46:24.376673Z","caller":"traceutil/trace.go:171","msg":"trace[2076432097] transaction","detail":"{read_only:false; response_revision:1038; number_of_response:1; }","duration":"129.125555ms","start":"2025-02-14T20:46:24.247529Z","end":"2025-02-14T20:46:24.376655Z","steps":["trace[2076432097] 'process raft request'  (duration: 129.021376ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:46:32.829560Z","caller":"traceutil/trace.go:171","msg":"trace[389652814] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"186.381134ms","start":"2025-02-14T20:46:32.643157Z","end":"2025-02-14T20:46:32.829538Z","steps":["trace[389652814] 'process raft request'  (duration: 186.295721ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:47:05.379183Z","caller":"traceutil/trace.go:171","msg":"trace[1746823904] linearizableReadLoop","detail":"{readStateIndex:1314; appliedIndex:1313; }","duration":"307.037177ms","start":"2025-02-14T20:47:05.072124Z","end":"2025-02-14T20:47:05.379162Z","steps":["trace[1746823904] 'read index received'  (duration: 306.944619ms)","trace[1746823904] 'applied index is now lower than readState.Index'  (duration: 91.805µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T20:47:05.379374Z","caller":"traceutil/trace.go:171","msg":"trace[925341128] transaction","detail":"{read_only:false; response_revision:1276; number_of_response:1; }","duration":"334.330528ms","start":"2025-02-14T20:47:05.045036Z","end":"2025-02-14T20:47:05.379367Z","steps":["trace[925341128] 'process raft request'  (duration: 333.992946ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:47:05.379536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.449325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"warn","ts":"2025-02-14T20:47:05.379554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:47:05.045021Z","time spent":"334.373041ms","remote":"127.0.0.1:39486","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1212 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-02-14T20:47:05.379574Z","caller":"traceutil/trace.go:171","msg":"trace[734399857] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1276; }","duration":"289.49514ms","start":"2025-02-14T20:47:05.090070Z","end":"2025-02-14T20:47:05.379565Z","steps":["trace[734399857] 'agreement among raft nodes before linearized reading'  (duration: 289.394844ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:47:05.379686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.552218ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-02-14T20:47:05.379702Z","caller":"traceutil/trace.go:171","msg":"trace[1113786297] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1276; }","duration":"307.592913ms","start":"2025-02-14T20:47:05.072103Z","end":"2025-02-14T20:47:05.379696Z","steps":["trace[1113786297] 'agreement among raft nodes before linearized reading'  (duration: 307.558987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:47:05.379715Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:47:05.072091Z","time spent":"307.619931ms","remote":"127.0.0.1:39470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":29,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"info","ts":"2025-02-14T20:47:06.549172Z","caller":"traceutil/trace.go:171","msg":"trace[749691732] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"100.795961ms","start":"2025-02-14T20:47:06.448363Z","end":"2025-02-14T20:47:06.549159Z","steps":["trace[749691732] 'process raft request'  (duration: 100.686862ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:47:19.222289Z","caller":"traceutil/trace.go:171","msg":"trace[941405911] transaction","detail":"{read_only:false; response_revision:1426; number_of_response:1; }","duration":"139.135014ms","start":"2025-02-14T20:47:19.083140Z","end":"2025-02-14T20:47:19.222275Z","steps":["trace[941405911] 'process raft request'  (duration: 138.967746ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:47:19.500242Z","caller":"traceutil/trace.go:171","msg":"trace[68878849] linearizableReadLoop","detail":"{readStateIndex:1472; appliedIndex:1471; }","duration":"233.245389ms","start":"2025-02-14T20:47:19.266936Z","end":"2025-02-14T20:47:19.500182Z","steps":["trace[68878849] 'read index received'  (duration: 228.718369ms)","trace[68878849] 'applied index is now lower than readState.Index'  (duration: 4.526565ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T20:47:19.500323Z","caller":"traceutil/trace.go:171","msg":"trace[484551913] transaction","detail":"{read_only:false; response_revision:1427; number_of_response:1; }","duration":"268.912153ms","start":"2025-02-14T20:47:19.231404Z","end":"2025-02-14T20:47:19.500317Z","steps":["trace[484551913] 'process raft request'  (duration: 264.265113ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:47:19.500358Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.575487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1069"}
	{"level":"info","ts":"2025-02-14T20:47:19.500389Z","caller":"traceutil/trace.go:171","msg":"trace[2137594856] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1427; }","duration":"213.63261ms","start":"2025-02-14T20:47:19.286749Z","end":"2025-02-14T20:47:19.500381Z","steps":["trace[2137594856] 'agreement among raft nodes before linearized reading'  (duration: 213.574132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:47:19.500543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.622564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-02-14T20:47:19.500557Z","caller":"traceutil/trace.go:171","msg":"trace[1527220727] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1427; }","duration":"233.656629ms","start":"2025-02-14T20:47:19.266896Z","end":"2025-02-14T20:47:19.500553Z","steps":["trace[1527220727] 'agreement among raft nodes before linearized reading'  (duration: 233.595484ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:49:46 up 5 min,  0 users,  load average: 0.36, 0.74, 0.39
	Linux addons-371781 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [741c0c8ab6266303028344ae2cb4b4561c4025ab295027c8e1be1d419cdef301] <==
	E0214 20:45:58.488936       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.151.162:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.151.162:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.151.162:443: connect: connection refused" logger="UnhandledError"
	E0214 20:45:58.495414       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.151.162:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.151.162:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.151.162:443: connect: connection refused" logger="UnhandledError"
	I0214 20:45:58.591399       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0214 20:46:52.392417       1 conn.go:339] Error on socket receive: read tcp 192.168.39.67:8443->192.168.39.1:36072: use of closed network connection
	E0214 20:46:52.571046       1 conn.go:339] Error on socket receive: read tcp 192.168.39.67:8443->192.168.39.1:36082: use of closed network connection
	I0214 20:47:01.720314       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.135.44"}
	I0214 20:47:12.626341       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0214 20:47:13.779070       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0214 20:47:24.778490       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 20:47:24.953772       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.251.188"}
	I0214 20:47:29.265851       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0214 20:47:46.775646       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0214 20:47:51.952641       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 20:47:51.952697       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 20:47:51.976489       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 20:47:51.976603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 20:47:52.052852       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 20:47:52.053136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0214 20:47:52.123333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0214 20:47:52.123714       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0214 20:47:53.124580       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0214 20:47:53.124665       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0214 20:47:53.166260       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0214 20:47:59.504265       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0214 20:49:44.759319       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.168.35"}
	
	
	==> kube-controller-manager [ddcfa75cb1ea302af043b4236370acb1b0adb0a8e11d80018e4bb79dbc1e25ce] <==
	W0214 20:48:32.337951       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:48:32.338007       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 20:48:37.172525       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 20:48:37.173620       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0214 20:48:37.174416       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:48:37.174472       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 20:48:53.200824       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 20:48:53.201889       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0214 20:48:53.202839       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:48:53.202879       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 20:49:07.803993       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 20:49:07.805351       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0214 20:49:07.806121       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:49:07.806293       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 20:49:16.022268       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 20:49:16.023026       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0214 20:49:16.023762       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:49:16.023812       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0214 20:49:16.598813       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0214 20:49:16.599515       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0214 20:49:16.600161       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 20:49:16.600182       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0214 20:49:44.617603       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="40.872978ms"
	I0214 20:49:44.642776       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="25.037068ms"
	I0214 20:49:44.642993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="97.795µs"
	
	
	==> kube-proxy [e6e88c69a4923b7ba3b1757db24803ac293ea54593cd58ce109e2edd2f55cce6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0214 20:45:20.019467       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0214 20:45:20.039226       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	E0214 20:45:20.039280       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 20:45:20.170418       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0214 20:45:20.170455       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0214 20:45:20.170478       1 server_linux.go:170] "Using iptables Proxier"
	I0214 20:45:20.173344       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 20:45:20.173586       1 server.go:497] "Version info" version="v1.32.1"
	I0214 20:45:20.173596       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 20:45:20.175641       1 config.go:199] "Starting service config controller"
	I0214 20:45:20.175655       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 20:45:20.175672       1 config.go:105] "Starting endpoint slice config controller"
	I0214 20:45:20.175675       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 20:45:20.175706       1 config.go:329] "Starting node config controller"
	I0214 20:45:20.175709       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 20:45:20.276572       1 shared_informer.go:320] Caches are synced for node config
	I0214 20:45:20.276597       1 shared_informer.go:320] Caches are synced for service config
	I0214 20:45:20.276622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2f2b22157ff68dee6d062c660d2159ab320eb2d194664da55c453f68bb35ab18] <==
	E0214 20:45:10.095157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:10.097403       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 20:45:10.099468       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0214 20:45:10.097886       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:10.097770       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 20:45:10.106388       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:10.097803       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 20:45:10.106501       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:10.951858       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 20:45:10.951936       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.066101       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 20:45:11.066161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.071367       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 20:45:11.071410       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.085242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 20:45:11.085303       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.091171       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 20:45:11.091355       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0214 20:45:11.152833       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 20:45:11.152872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.232573       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0214 20:45:11.232615       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0214 20:45:11.242138       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 20:45:11.242177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0214 20:45:14.166296       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 20:49:12 addons-371781 kubelet[1222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 14 20:49:12 addons-371781 kubelet[1222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 14 20:49:12 addons-371781 kubelet[1222]: E0214 20:49:12.915052    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566152914764163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:12 addons-371781 kubelet[1222]: E0214 20:49:12.915092    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566152914764163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:20 addons-371781 kubelet[1222]: I0214 20:49:20.688844    1222 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 14 20:49:22 addons-371781 kubelet[1222]: E0214 20:49:22.917405    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566162916959526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:22 addons-371781 kubelet[1222]: E0214 20:49:22.917423    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566162916959526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:32 addons-371781 kubelet[1222]: E0214 20:49:32.920132    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566172919905671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:32 addons-371781 kubelet[1222]: E0214 20:49:32.920178    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566172919905671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:38 addons-371781 kubelet[1222]: I0214 20:49:38.689029    1222 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xpmjz" secret="" err="secret \"gcp-auth\" not found"
	Feb 14 20:49:42 addons-371781 kubelet[1222]: E0214 20:49:42.921857    1222 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566182921569071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:42 addons-371781 kubelet[1222]: E0214 20:49:42.922134    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566182921569071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612791    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="node-driver-registrar"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612908    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="hostpath"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612918    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="13aeb75a-098e-4e94-881e-eb106c264c84" containerName="csi-attacher"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612924    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="csi-external-health-monitor-controller"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612930    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="4048a80d-0b2c-4ee5-ae9b-3d2f1d29207d" containerName="volume-snapshot-controller"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612935    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="liveness-probe"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.612941    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="csi-snapshotter"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.613006    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="b0d7d98a-1132-42ae-9f8e-944483d54a01" containerName="volume-snapshot-controller"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.613011    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="2898c65a-96f2-4593-9661-069a1592a047" containerName="csi-resizer"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.613017    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="69931797-69ec-4830-9298-cc4acc3c98cf" containerName="csi-provisioner"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.613022    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="8cadfe29-48f3-4470-926a-6d8f732fc2f8" containerName="local-path-provisioner"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.613027    1222 memory_manager.go:355] "RemoveStaleState removing state" podUID="78fe9a7a-bdc3-4f37-9285-1ed4b7378123" containerName="task-pv-container"
	Feb 14 20:49:44 addons-371781 kubelet[1222]: I0214 20:49:44.762491    1222 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5b49\" (UniqueName: \"kubernetes.io/projected/30ac889b-3857-4f1d-be64-3ad9f57b9a8a-kube-api-access-b5b49\") pod \"hello-world-app-7d9564db4-jmdld\" (UID: \"30ac889b-3857-4f1d-be64-3ad9f57b9a8a\") " pod="default/hello-world-app-7d9564db4-jmdld"
	
	
	==> storage-provisioner [e7dc5dc33049319651a21dce9dcd840ed5a4d0e3c77c661e9e1c754c60ecebe4] <==
	I0214 20:45:24.538427       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 20:45:24.559925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 20:45:24.560144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 20:45:24.574797       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 20:45:24.574886       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-371781_6b46a2ee-6e54-4200-8fb4-3f327ec3a80a!
	I0214 20:45:24.576818       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a45aa7de-295f-41ef-88e0-792db9e33752", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-371781_6b46a2ee-6e54-4200-8fb4-3f327ec3a80a became leader
	I0214 20:45:24.679634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-371781_6b46a2ee-6e54-4200-8fb4-3f327ec3a80a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-371781 -n addons-371781
helpers_test.go:261: (dbg) Run:  kubectl --context addons-371781 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-fdkfh ingress-nginx-admission-patch-fcx7h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-371781 describe pod ingress-nginx-admission-create-fdkfh ingress-nginx-admission-patch-fcx7h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-371781 describe pod ingress-nginx-admission-create-fdkfh ingress-nginx-admission-patch-fcx7h: exit status 1 (55.008501ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fdkfh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fcx7h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-371781 describe pod ingress-nginx-admission-create-fdkfh ingress-nginx-admission-patch-fcx7h: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable ingress-dns --alsologtostderr -v=1: (1.025648002s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable ingress --alsologtostderr -v=1: (7.649168174s)
--- FAIL: TestAddons/parallel/Ingress (151.27s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (202.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0bddb02e-1c49-4cbc-ac4d-bd8db4393502] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002656338s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-471578 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-471578 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-471578 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471578 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6a119b22-2655-46aa-917a-9402321a3635] Pending
helpers_test.go:344: "sp-pod" [6a119b22-2655-46aa-917a-9402321a3635] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6a119b22-2655-46aa-917a-9402321a3635] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004061658s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-471578 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-471578 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-471578 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92834a68-3db1-4765-8eee-b33896b24b1b] Pending
helpers_test.go:344: "sp-pod" [92834a68-3db1-4765-8eee-b33896b24b1b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-471578 -n functional-471578
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-14 20:58:34.050613963 +0000 UTC m=+859.755182896
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-471578 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-471578 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-471578/192.168.39.172
Start Time:       Fri, 14 Feb 2025 20:55:33 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh8pc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-vh8pc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3m    default-scheduler  Successfully assigned default/sp-pod to functional-471578
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-471578 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-471578 logs sp-pod -n default: exit status 1 (63.702973ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-471578 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-471578 -n functional-471578
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 logs -n 25: (1.357165867s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-471578 image save kicbase/echo-server:functional-471578      | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-471578 image rm                                              | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | kicbase/echo-server:functional-471578                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| service        | functional-471578 service                                               | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | hello-node-connect --url                                                |                   |         |         |                     |                     |
	| image          | functional-471578 image ls                                              | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/ssl/certs/250783.pem                                               |                   |         |         |                     |                     |
	| image          | functional-471578 image load                                            | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /usr/share/ca-certificates/250783.pem                                   |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/test/nested/copy/250783/hosts                                      |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/ssl/certs/51391683.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/ssl/certs/2507832.pem                                              |                   |         |         |                     |                     |
	| image          | functional-471578 image ls                                              | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /usr/share/ca-certificates/2507832.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh sudo cat                                          | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                               |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh echo                                              | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | hello                                                                   |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh cat                                               | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | /etc/hostname                                                           |                   |         |         |                     |                     |
	| update-context | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-471578 ssh pgrep                                             | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-471578 image build -t                                        | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | localhost/my-image:functional-471578                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-471578                                                       | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-471578 image ls                                              | functional-471578 | jenkins | v1.35.0 | 14 Feb 25 20:55 UTC | 14 Feb 25 20:55 UTC |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 20:55:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 20:55:16.281154  257849 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:55:16.281410  257849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:16.281442  257849 out.go:358] Setting ErrFile to fd 2...
	I0214 20:55:16.281458  257849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:16.281750  257849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 20:55:16.282300  257849 out.go:352] Setting JSON to false
	I0214 20:55:16.283250  257849 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5860,"bootTime":1739560656,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:55:16.283359  257849 start.go:140] virtualization: kvm guest
	I0214 20:55:16.285249  257849 out.go:177] * [functional-471578] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 20:55:16.286777  257849 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 20:55:16.286794  257849 notify.go:220] Checking for updates...
	I0214 20:55:16.289162  257849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:55:16.290448  257849 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:55:16.291591  257849 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:55:16.292804  257849 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 20:55:16.293760  257849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 20:55:16.295351  257849 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:55:16.295913  257849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:16.295982  257849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.313105  257849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0214 20:55:16.313769  257849 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.314513  257849 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.314560  257849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.314872  257849 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.315042  257849 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.315256  257849 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:55:16.315607  257849 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:16.315640  257849 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.330801  257849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
	I0214 20:55:16.331218  257849 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.331862  257849 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.331891  257849 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.332288  257849 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.332515  257849 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.365584  257849 out.go:177] * Using the kvm2 driver based on existing profile
	I0214 20:55:16.366579  257849 start.go:304] selected driver: kvm2
	I0214 20:55:16.366593  257849 start.go:908] validating driver "kvm2" against &{Name:functional-471578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterNa
me:functional-471578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:55:16.366757  257849 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 20:55:16.367760  257849 cni.go:84] Creating CNI manager for ""
	I0214 20:55:16.367824  257849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:55:16.367877  257849 start.go:347] cluster config:
	{Name:functional-471578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-471578 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSi
ze:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:55:16.369609  257849 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.870378990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2779a8c-0275-4c11-8c85-f4de224f7b77 name=/runtime.v1.RuntimeService/Version
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.871314187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a738f6c-e2ae-40d9-9fc3-6650df47eb42 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.871996609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566714871977679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a738f6c-e2ae-40d9-9fc3-6650df47eb42 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.872649204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4c486eb-3b38-494d-8d6d-a1dc53a5c47d name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.872714729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4c486eb-3b38-494d-8d6d-a1dc53a5c47d name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.873075443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4a67ca596fa74b43713ef1750789b976356c57e70ca91d3e6eb6ad439596d75,PodSandboxId:439a5df89d93f8fd661794723d2963018251fd846565cafaa17c683380f70997,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1739566548258877974,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-747xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a90e7a5a-b398-44ce-93f1-bffccab1a52a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c58791c7d9f1eccf5d18b719bb58116527d9827ab747da2926ca2f67568518c,PodSandboxId:48c03558dab4bee601062561a54929ff7b670f203b3cf6f0d26e30516ca440ff,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1739566531984246837,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hnktj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3c5198ea-
e2ae-48e3-b4d5-5cdf501f05c8,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca53617bc5546991c61b0f56042be951d12a3347fb5c7e19eb4f464daff315,PodSandboxId:5567c700f7d1046be507f86704f21146be12088462f985090283bda50383cda5,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1739566527548243098,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-hfj77,io.kuber
netes.pod.namespace: default,io.kubernetes.pod.uid: fa3fc291-1d4f-4383-8342-5159912789e8,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16924a73642331123142c5444ccc89ab4d4d09a80bf8bd4145f447467aaa3893,PodSandboxId:8f1c76c2b96203e2dd3aa18c8db1c9c5b27f347a4a8b7429106dd3501f54cfe3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1739566522229430975,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kuberne
tes.pod.name: dashboard-metrics-scraper-5d59dccf9b-tmt8b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8a27440e-6e7c-416f-8e80-bf8bac2eba4f,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6787fad19b80b5fe428eb2b51df99dfe33e04b50553896add4d20d3bcb68e7,PodSandboxId:eaefb7c1fb746b2d91792c5e92517eb7480971d30d8aa59876df7056f213e9cb,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAI
NER_EXITED,CreatedAt:1739566519924193947,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c09ee777-fe60-42ad-aaab-ed0413070a94,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cbe8872ae2e2662e02035719c21d19b111ec118ab51a8d282406858a21012b,PodSandboxId:3757748a039486700c3bff9676adb078ee19782b9d67aed25d8d150a97bb9892,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_R
UNNING,CreatedAt:1739566517559766878,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-8pkq2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f45186a-94f5-420e-a40e-3c3aab735c45,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ef33c4de4d402cf1a15eb56db9b924ad4da758dd4e2aa476ae9c70b778357b,PodSandboxId:3d79caed29292ed849e9de3cd7c92025facf3c218d812f48f8622130d10e7b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739566492840
275631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc81dbd56c4ec7ecffc54a4fcbae34e9a41edeadf370a056c5efc905039a16e4,PodSandboxId:779abdec562505da770bcb70adbdc54e1e81fa3e85c97fa7d03ecdd865c5452b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded0
87897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739566492535005549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6897bec9e0153f069d5189fa970587df58277ba4d5eef345a1a504996e31413b,PodSandboxId:81613830c853aecdfa6ac7d250e335ed605defad04e2535bb68d3009ae1b7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739566492558370261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a797c0f02d0cbdcb4ffca691447918713640c2f1afd9cd72f7cc8f2520c0b,PodSandboxId:2a44e6dba5e8183de6dd11630ecedb30fef4e07e3b140b3259f24f53fe87b5b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739566488836217410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a4fa6e91fa06e0f374243bc33b72fa46a5b15a1ca1a075b8f0b21f025d8134,PodSandboxId:5a02c36c0053b4a363b5973e26c58868bfbc014f6c97e94aceea9cfa3fb63b2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739566488780921951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d41d965366fad4a2eae97d1b2c5ace064fced44262849fdc2e4b4338c318fef,PodSandboxId:e652a3aa798e2fb14bd9e41f86a441cfdb82696a2472f0b036c5eaeb18e482da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739566488789449345,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c416bb85de0f13e7d3a78fbc4fcfa76,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533c683e67f628887483aa7b0805f7ce209dd173c5803d3813f1943c8f7d50b,PodSandboxId:e2399b83d18ca592323fbd2bdcdb017139afbae1469afaa061455ea3b25c78ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739566488702191516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994dd91a124ad714ede1b4c05c50ddde3d7ea84ba27cc1a1ecc12405487fe398,PodSandboxId:80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d
96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739566449683136868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2286f9d71df7def774c10528d4cd36543fc71b90be5b94168ea01a057f2c38,PodSandboxId:384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739566449670796063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7bd48c11c9eb659ecfc8997021cbf82bab18dafb8e001e55683715c02c0513,PodSandboxId:3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa
1b23d1,State:CONTAINER_EXITED,CreatedAt:1739566445880372535,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108ec1056a657dfb698f9c131cdf1d6c6782572019a77276b18d11be574c241,PodSandboxId:216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d3
5,State:CONTAINER_EXITED,CreatedAt:1739566445868577919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa9fc593724cb25843c253b472359eadf6e8cdd39c3c4276561a205792108a48,PodSandboxId:aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State
:CONTAINER_EXITED,CreatedAt:1739566445857389130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ac1dc15f0c1b85608bf44bd981d38453d4c505b176dfef3edd50ed4e48bed0,PodSandboxId:ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739566432161
128044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4c486eb-3b38-494d-8d6d-a1dc53a5c47d name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.902367065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c82d454a-fee7-4481-8872-4592747c9d14 name=/runtime.v1.RuntimeService/Version
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.902418631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c82d454a-fee7-4481-8872-4592747c9d14 name=/runtime.v1.RuntimeService/Version
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.903185635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b6886e0-9df4-46d1-8a20-34ccfeca2393 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.904116727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566714904098174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b6886e0-9df4-46d1-8a20-34ccfeca2393 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.904615391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd3f16cd-3947-4803-847a-65b97e3f7038 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.904675099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd3f16cd-3947-4803-847a-65b97e3f7038 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.905020679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4a67ca596fa74b43713ef1750789b976356c57e70ca91d3e6eb6ad439596d75,PodSandboxId:439a5df89d93f8fd661794723d2963018251fd846565cafaa17c683380f70997,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1739566548258877974,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-747xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a90e7a5a-b398-44ce-93f1-bffccab1a52a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c58791c7d9f1eccf5d18b719bb58116527d9827ab747da2926ca2f67568518c,PodSandboxId:48c03558dab4bee601062561a54929ff7b670f203b3cf6f0d26e30516ca440ff,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1739566531984246837,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hnktj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3c5198ea-
e2ae-48e3-b4d5-5cdf501f05c8,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca53617bc5546991c61b0f56042be951d12a3347fb5c7e19eb4f464daff315,PodSandboxId:5567c700f7d1046be507f86704f21146be12088462f985090283bda50383cda5,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1739566527548243098,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-hfj77,io.kuber
netes.pod.namespace: default,io.kubernetes.pod.uid: fa3fc291-1d4f-4383-8342-5159912789e8,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16924a73642331123142c5444ccc89ab4d4d09a80bf8bd4145f447467aaa3893,PodSandboxId:8f1c76c2b96203e2dd3aa18c8db1c9c5b27f347a4a8b7429106dd3501f54cfe3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1739566522229430975,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kuberne
tes.pod.name: dashboard-metrics-scraper-5d59dccf9b-tmt8b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8a27440e-6e7c-416f-8e80-bf8bac2eba4f,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6787fad19b80b5fe428eb2b51df99dfe33e04b50553896add4d20d3bcb68e7,PodSandboxId:eaefb7c1fb746b2d91792c5e92517eb7480971d30d8aa59876df7056f213e9cb,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAI
NER_EXITED,CreatedAt:1739566519924193947,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c09ee777-fe60-42ad-aaab-ed0413070a94,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cbe8872ae2e2662e02035719c21d19b111ec118ab51a8d282406858a21012b,PodSandboxId:3757748a039486700c3bff9676adb078ee19782b9d67aed25d8d150a97bb9892,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_R
UNNING,CreatedAt:1739566517559766878,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-8pkq2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f45186a-94f5-420e-a40e-3c3aab735c45,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ef33c4de4d402cf1a15eb56db9b924ad4da758dd4e2aa476ae9c70b778357b,PodSandboxId:3d79caed29292ed849e9de3cd7c92025facf3c218d812f48f8622130d10e7b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739566492840
275631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc81dbd56c4ec7ecffc54a4fcbae34e9a41edeadf370a056c5efc905039a16e4,PodSandboxId:779abdec562505da770bcb70adbdc54e1e81fa3e85c97fa7d03ecdd865c5452b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded0
87897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739566492535005549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6897bec9e0153f069d5189fa970587df58277ba4d5eef345a1a504996e31413b,PodSandboxId:81613830c853aecdfa6ac7d250e335ed605defad04e2535bb68d3009ae1b7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739566492558370261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a797c0f02d0cbdcb4ffca691447918713640c2f1afd9cd72f7cc8f2520c0b,PodSandboxId:2a44e6dba5e8183de6dd11630ecedb30fef4e07e3b140b3259f24f53fe87b5b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739566488836217410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a4fa6e91fa06e0f374243bc33b72fa46a5b15a1ca1a075b8f0b21f025d8134,PodSandboxId:5a02c36c0053b4a363b5973e26c58868bfbc014f6c97e94aceea9cfa3fb63b2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739566488780921951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d41d965366fad4a2eae97d1b2c5ace064fced44262849fdc2e4b4338c318fef,PodSandboxId:e652a3aa798e2fb14bd9e41f86a441cfdb82696a2472f0b036c5eaeb18e482da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739566488789449345,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c416bb85de0f13e7d3a78fbc4fcfa76,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533c683e67f628887483aa7b0805f7ce209dd173c5803d3813f1943c8f7d50b,PodSandboxId:e2399b83d18ca592323fbd2bdcdb017139afbae1469afaa061455ea3b25c78ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739566488702191516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994dd91a124ad714ede1b4c05c50ddde3d7ea84ba27cc1a1ecc12405487fe398,PodSandboxId:80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d
96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739566449683136868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2286f9d71df7def774c10528d4cd36543fc71b90be5b94168ea01a057f2c38,PodSandboxId:384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739566449670796063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7bd48c11c9eb659ecfc8997021cbf82bab18dafb8e001e55683715c02c0513,PodSandboxId:3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa
1b23d1,State:CONTAINER_EXITED,CreatedAt:1739566445880372535,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108ec1056a657dfb698f9c131cdf1d6c6782572019a77276b18d11be574c241,PodSandboxId:216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d3
5,State:CONTAINER_EXITED,CreatedAt:1739566445868577919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa9fc593724cb25843c253b472359eadf6e8cdd39c3c4276561a205792108a48,PodSandboxId:aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State
:CONTAINER_EXITED,CreatedAt:1739566445857389130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ac1dc15f0c1b85608bf44bd981d38453d4c505b176dfef3edd50ed4e48bed0,PodSandboxId:ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739566432161
128044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd3f16cd-3947-4803-847a-65b97e3f7038 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.911983938Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dd7cce4b-d1f1-47aa-a6a8-9a1b5881ac80 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.912986975Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:439a5df89d93f8fd661794723d2963018251fd846565cafaa17c683380f70997,Metadata:&PodSandboxMetadata{Name:mysql-58ccfd96bb-747xg,Uid:a90e7a5a-b398-44ce-93f1-bffccab1a52a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739566536358222872,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-58ccfd96bb-747xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a90e7a5a-b398-44ce-93f1-bffccab1a52a,pod-template-hash: 58ccfd96bb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:36.038218458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5567c700f7d1046be507f86704f21146be12088462f985090283bda50383cda5,Metadata:&PodSandboxMetadata{Name:hello-node-connect-58f9cf68d8-hfj77,Uid:fa3fc291-1d4f-4383-8342-5159912789e8,Namespace:default,Attemp
t:0,},State:SANDBOX_READY,CreatedAt:1739566527218591226,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-hfj77,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa3fc291-1d4f-4383-8342-5159912789e8,pod-template-hash: 58f9cf68d8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:25.956862529Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:48c03558dab4bee601062561a54929ff7b670f203b3cf6f0d26e30516ca440ff,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-7779f9b69b-hnktj,Uid:3c5198ea-e2ae-48e3-b4d5-5cdf501f05c8,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739566520130433500,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hnktj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3c5198ea-e2ae-48e3-b4d5-5cdf501f05c8,k8s-app: ku
bernetes-dashboard,pod-template-hash: 7779f9b69b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:18.018299193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f1c76c2b96203e2dd3aa18c8db1c9c5b27f347a4a8b7429106dd3501f54cfe3,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-5d59dccf9b-tmt8b,Uid:8a27440e-6e7c-416f-8e80-bf8bac2eba4f,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739566520110263056,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-5d59dccf9b-tmt8b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8a27440e-6e7c-416f-8e80-bf8bac2eba4f,k8s-app: dashboard-metrics-scraper,pod-template-hash: 5d59dccf9b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:18.003754065Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:eaefb7c1fb746b2d91792c5e
92517eb7480971d30d8aa59876df7056f213e9cb,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:c09ee777-fe60-42ad-aaab-ed0413070a94,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1739566517445777166,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c09ee777-fe60-42ad-aaab-ed0413070a94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:16.397820265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3757748a039486700c3bff9676adb078ee19782b9d67aed25d8d150a97bb9892,Metadata:&PodSandboxMetadata{Name:hello-node-fcfd88b6f-8pkq2,Uid:1f45186a-94f5-420e-a40e-3c3aab735c45,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739566513775556104,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-fcfd88b6f-8pkq2,io.kubernetes.pod.namespace: default,io.kubernetes.pod
.uid: 1f45186a-94f5-420e-a40e-3c3aab735c45,pod-template-hash: fcfd88b6f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:55:13.462637410Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d79caed29292ed849e9de3cd7c92025facf3c218d812f48f8622130d10e7b7a,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-bqb5l,Uid:c32cdd61-218a-4b2c-a211-e394ec28e134,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566492505579759,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:54:52.052324435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:779abdec562505da770bcb70adbdc54e1e81fa3e85c97fa7d03ecdd865c5452b,Metadata:&PodSandboxMetadata{Name:kube-proxy-kf2lm,Uid:06c6827b-b
00d-4b5e-abc7-cf2bab7309bf,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566492380103083,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:54:52.052333320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81613830c853aecdfa6ac7d250e335ed605defad04e2535bb68d3009ae1b7b5d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0bddb02e-1c49-4cbc-ac4d-bd8db4393502,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566492376998815,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-14T20:54:52.052335809Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a44e6dba5e8183de6dd11630ecedb30fef4e07e3b140b3259f24f53fe87b5b0,Metadata:&PodSandboxMetadata{N
ame:etcd-functional-471578,Uid:a42ff0becb25fc702e0745d4cc25fdbd,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566488563341675,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.172:2379,kubernetes.io/config.hash: a42ff0becb25fc702e0745d4cc25fdbd,kubernetes.io/config.seen: 2025-02-14T20:54:48.055541844Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a02c36c0053b4a363b5973e26c58868bfbc014f6c97e94aceea9cfa3fb63b2a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-471578,Uid:1958b59b654725d1613da743048bc13e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566488540677284,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.cont
ainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1958b59b654725d1613da743048bc13e,kubernetes.io/config.seen: 2025-02-14T20:54:48.055543663Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e652a3aa798e2fb14bd9e41f86a441cfdb82696a2472f0b036c5eaeb18e482da,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-471578,Uid:1c416bb85de0f13e7d3a78fbc4fcfa76,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739566488529605072,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c416bb85de0f13e7d3a78fbc4fcfa76,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoin
t: 192.168.39.172:8441,kubernetes.io/config.hash: 1c416bb85de0f13e7d3a78fbc4fcfa76,kubernetes.io/config.seen: 2025-02-14T20:54:48.055542914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2399b83d18ca592323fbd2bdcdb017139afbae1469afaa061455ea3b25c78ef,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-471578,Uid:03875c89b3a9dfb8a5a672ab1d6ca8d7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1739566488526569296,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03875c89b3a9dfb8a5a672ab1d6ca8d7,kubernetes.io/config.seen: 2025-02-14T20:54:48.055538835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac,Metadata:&PodSandb
oxMetadata{Name:coredns-668d6bf9bc-bqb5l,Uid:c32cdd61-218a-4b2c-a211-e394ec28e134,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566431301516357,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:53:05.850400885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a,Metadata:&PodSandboxMetadata{Name:kube-proxy-kf2lm,Uid:06c6827b-b00d-4b5e-abc7-cf2bab7309bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566431087818504,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T20:53:05.810195179Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927,Metadata:&PodSandboxMetadata{Name:etcd-functional-471578,Uid:a42ff0becb25fc702e0745d4cc25fdbd,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566431078017733,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.172:2379,kubernetes.io/config.hash: a42ff0becb25fc702e0745d4cc25fdbd,kubernetes.io/config.seen: 2025-02-14T20:53:00.698874816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbo
x{Id:3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-471578,Uid:03875c89b3a9dfb8a5a672ab1d6ca8d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566431042990140,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 03875c89b3a9dfb8a5a672ab1d6ca8d7,kubernetes.io/config.seen: 2025-02-14T20:53:00.698871448Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-471578,Uid:1958b59b654725d1613da743048bc13e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566431004747610,Labels:map[string]st
ring{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1958b59b654725d1613da743048bc13e,kubernetes.io/config.seen: 2025-02-14T20:53:00.698876712Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0bddb02e-1c49-4cbc-ac4d-bd8db4393502,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1739566430978378267,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:ma
p[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-14T20:53:06.931686854Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dd7cce4b-d1f1-47aa-a6a8-9a1b5881ac80 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.914325535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e861027d-e2c1-4a42-8704-ab53c50eeb93 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.914374267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e861027d-e2c1-4a42-8704-ab53c50eeb93 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.914685207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4a67ca596fa74b43713ef1750789b976356c57e70ca91d3e6eb6ad439596d75,PodSandboxId:439a5df89d93f8fd661794723d2963018251fd846565cafaa17c683380f70997,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1739566548258877974,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-747xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a90e7a5a-b398-44ce-93f1-bffccab1a52a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c58791c7d9f1eccf5d18b719bb58116527d9827ab747da2926ca2f67568518c,PodSandboxId:48c03558dab4bee601062561a54929ff7b670f203b3cf6f0d26e30516ca440ff,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1739566531984246837,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hnktj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3c5198ea-
e2ae-48e3-b4d5-5cdf501f05c8,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca53617bc5546991c61b0f56042be951d12a3347fb5c7e19eb4f464daff315,PodSandboxId:5567c700f7d1046be507f86704f21146be12088462f985090283bda50383cda5,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1739566527548243098,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-hfj77,io.kuber
netes.pod.namespace: default,io.kubernetes.pod.uid: fa3fc291-1d4f-4383-8342-5159912789e8,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16924a73642331123142c5444ccc89ab4d4d09a80bf8bd4145f447467aaa3893,PodSandboxId:8f1c76c2b96203e2dd3aa18c8db1c9c5b27f347a4a8b7429106dd3501f54cfe3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1739566522229430975,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kuberne
tes.pod.name: dashboard-metrics-scraper-5d59dccf9b-tmt8b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8a27440e-6e7c-416f-8e80-bf8bac2eba4f,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6787fad19b80b5fe428eb2b51df99dfe33e04b50553896add4d20d3bcb68e7,PodSandboxId:eaefb7c1fb746b2d91792c5e92517eb7480971d30d8aa59876df7056f213e9cb,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAI
NER_EXITED,CreatedAt:1739566519924193947,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c09ee777-fe60-42ad-aaab-ed0413070a94,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cbe8872ae2e2662e02035719c21d19b111ec118ab51a8d282406858a21012b,PodSandboxId:3757748a039486700c3bff9676adb078ee19782b9d67aed25d8d150a97bb9892,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_R
UNNING,CreatedAt:1739566517559766878,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-8pkq2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f45186a-94f5-420e-a40e-3c3aab735c45,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ef33c4de4d402cf1a15eb56db9b924ad4da758dd4e2aa476ae9c70b778357b,PodSandboxId:3d79caed29292ed849e9de3cd7c92025facf3c218d812f48f8622130d10e7b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739566492840
275631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc81dbd56c4ec7ecffc54a4fcbae34e9a41edeadf370a056c5efc905039a16e4,PodSandboxId:779abdec562505da770bcb70adbdc54e1e81fa3e85c97fa7d03ecdd865c5452b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded0
87897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739566492535005549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6897bec9e0153f069d5189fa970587df58277ba4d5eef345a1a504996e31413b,PodSandboxId:81613830c853aecdfa6ac7d250e335ed605defad04e2535bb68d3009ae1b7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739566492558370261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a797c0f02d0cbdcb4ffca691447918713640c2f1afd9cd72f7cc8f2520c0b,PodSandboxId:2a44e6dba5e8183de6dd11630ecedb30fef4e07e3b140b3259f24f53fe87b5b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739566488836217410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a4fa6e91fa06e0f374243bc33b72fa46a5b15a1ca1a075b8f0b21f025d8134,PodSandboxId:5a02c36c0053b4a363b5973e26c58868bfbc014f6c97e94aceea9cfa3fb63b2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739566488780921951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d41d965366fad4a2eae97d1b2c5ace064fced44262849fdc2e4b4338c318fef,PodSandboxId:e652a3aa798e2fb14bd9e41f86a441cfdb82696a2472f0b036c5eaeb18e482da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739566488789449345,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c416bb85de0f13e7d3a78fbc4fcfa76,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533c683e67f628887483aa7b0805f7ce209dd173c5803d3813f1943c8f7d50b,PodSandboxId:e2399b83d18ca592323fbd2bdcdb017139afbae1469afaa061455ea3b25c78ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739566488702191516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994dd91a124ad714ede1b4c05c50ddde3d7ea84ba27cc1a1ecc12405487fe398,PodSandboxId:80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d
96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739566449683136868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2286f9d71df7def774c10528d4cd36543fc71b90be5b94168ea01a057f2c38,PodSandboxId:384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739566449670796063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7bd48c11c9eb659ecfc8997021cbf82bab18dafb8e001e55683715c02c0513,PodSandboxId:3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa
1b23d1,State:CONTAINER_EXITED,CreatedAt:1739566445880372535,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108ec1056a657dfb698f9c131cdf1d6c6782572019a77276b18d11be574c241,PodSandboxId:216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d3
5,State:CONTAINER_EXITED,CreatedAt:1739566445868577919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa9fc593724cb25843c253b472359eadf6e8cdd39c3c4276561a205792108a48,PodSandboxId:aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State
:CONTAINER_EXITED,CreatedAt:1739566445857389130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ac1dc15f0c1b85608bf44bd981d38453d4c505b176dfef3edd50ed4e48bed0,PodSandboxId:ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739566432161
128044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e861027d-e2c1-4a42-8704-ab53c50eeb93 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.949023835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b73ef65-8a5d-4392-8048-bebbd9d7777b name=/runtime.v1.RuntimeService/Version
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.949159298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b73ef65-8a5d-4392-8048-bebbd9d7777b name=/runtime.v1.RuntimeService/Version
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.950479874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=960356da-c732-44ff-8b5f-a3e27361a11d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.951259475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566714951238606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=960356da-c732-44ff-8b5f-a3e27361a11d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.951942848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fc4cb8c-cd2a-4ad2-9902-3a2aafe78ecc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.951988793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fc4cb8c-cd2a-4ad2-9902-3a2aafe78ecc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 20:58:34 functional-471578 crio[4719]: time="2025-02-14 20:58:34.952431510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4a67ca596fa74b43713ef1750789b976356c57e70ca91d3e6eb6ad439596d75,PodSandboxId:439a5df89d93f8fd661794723d2963018251fd846565cafaa17c683380f70997,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1739566548258877974,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-58ccfd96bb-747xg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a90e7a5a-b398-44ce-93f1-bffccab1a52a,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c58791c7d9f1eccf5d18b719bb58116527d9827ab747da2926ca2f67568518c,PodSandboxId:48c03558dab4bee601062561a54929ff7b670f203b3cf6f0d26e30516ca440ff,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1739566531984246837,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-7779f9b69b-hnktj,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 3c5198ea-
e2ae-48e3-b4d5-5cdf501f05c8,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8ca53617bc5546991c61b0f56042be951d12a3347fb5c7e19eb4f464daff315,PodSandboxId:5567c700f7d1046be507f86704f21146be12088462f985090283bda50383cda5,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1739566527548243098,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-58f9cf68d8-hfj77,io.kuber
netes.pod.namespace: default,io.kubernetes.pod.uid: fa3fc291-1d4f-4383-8342-5159912789e8,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16924a73642331123142c5444ccc89ab4d4d09a80bf8bd4145f447467aaa3893,PodSandboxId:8f1c76c2b96203e2dd3aa18c8db1c9c5b27f347a4a8b7429106dd3501f54cfe3,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1739566522229430975,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kuberne
tes.pod.name: dashboard-metrics-scraper-5d59dccf9b-tmt8b,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8a27440e-6e7c-416f-8e80-bf8bac2eba4f,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d6787fad19b80b5fe428eb2b51df99dfe33e04b50553896add4d20d3bcb68e7,PodSandboxId:eaefb7c1fb746b2d91792c5e92517eb7480971d30d8aa59876df7056f213e9cb,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAI
NER_EXITED,CreatedAt:1739566519924193947,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c09ee777-fe60-42ad-aaab-ed0413070a94,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cbe8872ae2e2662e02035719c21d19b111ec118ab51a8d282406858a21012b,PodSandboxId:3757748a039486700c3bff9676adb078ee19782b9d67aed25d8d150a97bb9892,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_R
UNNING,CreatedAt:1739566517559766878,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-fcfd88b6f-8pkq2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1f45186a-94f5-420e-a40e-3c3aab735c45,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20ef33c4de4d402cf1a15eb56db9b924ad4da758dd4e2aa476ae9c70b778357b,PodSandboxId:3d79caed29292ed849e9de3cd7c92025facf3c218d812f48f8622130d10e7b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739566492840
275631,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc81dbd56c4ec7ecffc54a4fcbae34e9a41edeadf370a056c5efc905039a16e4,PodSandboxId:779abdec562505da770bcb70adbdc54e1e81fa3e85c97fa7d03ecdd865c5452b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded0
87897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739566492535005549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6897bec9e0153f069d5189fa970587df58277ba4d5eef345a1a504996e31413b,PodSandboxId:81613830c853aecdfa6ac7d250e335ed605defad04e2535bb68d3009ae1b7b5d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739566492558370261,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a9a797c0f02d0cbdcb4ffca691447918713640c2f1afd9cd72f7cc8f2520c0b,PodSandboxId:2a44e6dba5e8183de6dd11630ecedb30fef4e07e3b140b3259f24f53fe87b5b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739566488836217410,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a4fa6e91fa06e0f374243bc33b72fa46a5b15a1ca1a075b8f0b21f025d8134,PodSandboxId:5a02c36c0053b4a363b5973e26c58868bfbc014f6c97e94aceea9cfa3fb63b2a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739566488780921951,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d41d965366fad4a2eae97d1b2c5ace064fced44262849fdc2e4b4338c318fef,PodSandboxId:e652a3aa798e2fb14bd9e41f86a441cfdb82696a2472f0b036c5eaeb18e482da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739566488789449345,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c416bb85de0f13e7d3a78fbc4fcfa76,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e533c683e67f628887483aa7b0805f7ce209dd173c5803d3813f1943c8f7d50b,PodSandboxId:e2399b83d18ca592323fbd2bdcdb017139afbae1469afaa061455ea3b25c78ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRe
f:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739566488702191516,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:994dd91a124ad714ede1b4c05c50ddde3d7ea84ba27cc1a1ecc12405487fe398,PodSandboxId:80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d
96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739566449683136868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kf2lm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06c6827b-b00d-4b5e-abc7-cf2bab7309bf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2286f9d71df7def774c10528d4cd36543fc71b90be5b94168ea01a057f2c38,PodSandboxId:384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1739566449670796063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bddb02e-1c49-4cbc-ac4d-bd8db4393502,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da7bd48c11c9eb659ecfc8997021cbf82bab18dafb8e001e55683715c02c0513,PodSandboxId:3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa
1b23d1,State:CONTAINER_EXITED,CreatedAt:1739566445880372535,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03875c89b3a9dfb8a5a672ab1d6ca8d7,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108ec1056a657dfb698f9c131cdf1d6c6782572019a77276b18d11be574c241,PodSandboxId:216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d3
5,State:CONTAINER_EXITED,CreatedAt:1739566445868577919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1958b59b654725d1613da743048bc13e,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa9fc593724cb25843c253b472359eadf6e8cdd39c3c4276561a205792108a48,PodSandboxId:aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State
:CONTAINER_EXITED,CreatedAt:1739566445857389130,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-471578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a42ff0becb25fc702e0745d4cc25fdbd,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ac1dc15f0c1b85608bf44bd981d38453d4c505b176dfef3edd50ed4e48bed0,PodSandboxId:ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739566432161
128044,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bqb5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32cdd61-218a-4b2c-a211-e394ec28e134,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fc4cb8c-cd2a-4ad2-9902-3a2aafe78ecc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a4a67ca596fa7       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago       Running             mysql                       0                   439a5df89d93f       mysql-58ccfd96bb-747xg
	8c58791c7d9f1       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         3 minutes ago       Running             kubernetes-dashboard        0                   48c03558dab4b       kubernetes-dashboard-7779f9b69b-hnktj
	c8ca53617bc55       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 3 minutes ago       Running             echoserver                  0                   5567c700f7d10       hello-node-connect-58f9cf68d8-hfj77
	16924a7364233       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   8f1c76c2b9620       dashboard-metrics-scraper-5d59dccf9b-tmt8b
	6d6787fad19b8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   eaefb7c1fb746       busybox-mount
	10cbe8872ae2e       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   3757748a03948       hello-node-fcfd88b6f-8pkq2
	20ef33c4de4d4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     2                   3d79caed29292       coredns-668d6bf9bc-bqb5l
	6897bec9e0153       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         4                   81613830c853a       storage-provisioner
	cc81dbd56c4ec       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 3 minutes ago       Running             kube-proxy                  3                   779abdec56250       kube-proxy-kf2lm
	1a9a797c0f02d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago       Running             etcd                        3                   2a44e6dba5e81       etcd-functional-471578
	3d41d965366fa       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 3 minutes ago       Running             kube-apiserver              0                   e652a3aa798e2       kube-apiserver-functional-471578
	36a4fa6e91fa0       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 3 minutes ago       Running             kube-controller-manager     3                   5a02c36c0053b       kube-controller-manager-functional-471578
	e533c683e67f6       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 3 minutes ago       Running             kube-scheduler              3                   e2399b83d18ca       kube-scheduler-functional-471578
	994dd91a124ad       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 4 minutes ago       Exited              kube-proxy                  2                   80eb32d389aed       kube-proxy-kf2lm
	af2286f9d71df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         3                   384998fd91b0f       storage-provisioner
	da7bd48c11c9e       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 4 minutes ago       Exited              kube-scheduler              2                   3c83f9a34ced4       kube-scheduler-functional-471578
	2108ec1056a65       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 4 minutes ago       Exited              kube-controller-manager     2                   216fa0e455c73       kube-controller-manager-functional-471578
	aa9fc593724cb       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago       Exited              etcd                        2                   aa3f609bf847d       etcd-functional-471578
	34ac1dc15f0c1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     1                   ae680346afd20       coredns-668d6bf9bc-bqb5l
	
	
	==> coredns [20ef33c4de4d402cf1a15eb56db9b924ad4da758dd4e2aa476ae9c70b778357b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:32971 - 19532 "HINFO IN 851207100841468033.5578299001953400391. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012525724s
	
	
	==> coredns [34ac1dc15f0c1b85608bf44bd981d38453d4c505b176dfef3edd50ed4e48bed0] <==
	[INFO] plugin/kubernetes: Trace[2104666647]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 20:53:52.506) (total time: 10012ms):
	Trace[2104666647]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10012ms (20:54:02.518)
	Trace[2104666647]: [10.012247042s] [10.012247042s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1799351296]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 20:53:52.509) (total time: 10009ms):
	Trace[1799351296]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10009ms (20:54:02.518)
	Trace[1799351296]: [10.009625751s] [10.009625751s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[393540064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 20:53:52.509) (total time: 10009ms):
	Trace[393540064]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10009ms (20:54:02.519)
	Trace[393540064]: [10.009812951s] [10.009812951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-471578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-471578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=functional-471578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T20_53_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 20:52:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-471578
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 20:58:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 20:56:23 +0000   Fri, 14 Feb 2025 20:52:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 20:56:23 +0000   Fri, 14 Feb 2025 20:52:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 20:56:23 +0000   Fri, 14 Feb 2025 20:52:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 20:56:23 +0000   Fri, 14 Feb 2025 20:53:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    functional-471578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 480ce6965fc94bb4bd3169ab95f4ca49
	  System UUID:                480ce696-5fc9-4bb4-bd31-69ab95f4ca49
	  Boot ID:                    07c89edd-bdf8-4d0f-b4f6-ce825fb0f77f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-hfj77           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-node-fcfd88b6f-8pkq2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  default                     mysql-58ccfd96bb-747xg                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m59s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-bqb5l                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m30s
	  kube-system                 etcd-functional-471578                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m35s
	  kube-system                 kube-apiserver-functional-471578              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-functional-471578     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-proxy-kf2lm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-functional-471578              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-tmt8b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-hnktj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m35s                  kubelet          Node functional-471578 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m35s                  kubelet          Node functional-471578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s                  kubelet          Node functional-471578 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m34s                  kubelet          Node functional-471578 status is now: NodeReady
	  Normal  RegisteredNode           5m31s                  node-controller  Node functional-471578 event: Registered Node functional-471578 in Controller
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node functional-471578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node functional-471578 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node functional-471578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m24s                  node-controller  Node functional-471578 event: Registered Node functional-471578 in Controller
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m47s)  kubelet          Node functional-471578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m47s)  kubelet          Node functional-471578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m47s)  kubelet          Node functional-471578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m40s                  node-controller  Node functional-471578 event: Registered Node functional-471578 in Controller
	
	
	==> dmesg <==
	[  +0.158201] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
	[  +0.121216] systemd-fstab-generator[2269]: Ignoring "noauto" option for root device
	[  +0.247869] systemd-fstab-generator[2297]: Ignoring "noauto" option for root device
	[  +0.624496] systemd-fstab-generator[2419]: Ignoring "noauto" option for root device
	[Feb14 20:54] kauditd_printk_skb: 203 callbacks suppressed
	[  +4.305695] systemd-fstab-generator[3325]: Ignoring "noauto" option for root device
	[  +0.695156] kauditd_printk_skb: 24 callbacks suppressed
	[  +4.439269] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +1.646865] kauditd_printk_skb: 47 callbacks suppressed
	[ +25.562472] systemd-fstab-generator[4643]: Ignoring "noauto" option for root device
	[  +0.131781] systemd-fstab-generator[4655]: Ignoring "noauto" option for root device
	[  +0.150214] systemd-fstab-generator[4669]: Ignoring "noauto" option for root device
	[  +0.131182] systemd-fstab-generator[4681]: Ignoring "noauto" option for root device
	[  +0.256457] systemd-fstab-generator[4709]: Ignoring "noauto" option for root device
	[  +7.917754] systemd-fstab-generator[4835]: Ignoring "noauto" option for root device
	[  +0.068985] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.692666] systemd-fstab-generator[4958]: Ignoring "noauto" option for root device
	[  +4.269258] kauditd_printk_skb: 82 callbacks suppressed
	[  +1.435207] systemd-fstab-generator[5829]: Ignoring "noauto" option for root device
	[Feb14 20:55] kauditd_printk_skb: 63 callbacks suppressed
	[  +8.273385] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.039638] kauditd_printk_skb: 65 callbacks suppressed
	[  +5.083423] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.034227] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.670936] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [1a9a797c0f02d0cbdcb4ffca691447918713640c2f1afd9cd72f7cc8f2520c0b] <==
	{"level":"info","ts":"2025-02-14T20:54:50.748191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.172:2379"}
	{"level":"warn","ts":"2025-02-14T20:55:26.570445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"357.456451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-02-14T20:55:26.570539Z","caller":"traceutil/trace.go:171","msg":"trace[1717350127] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:772; }","duration":"357.622721ms","start":"2025-02-14T20:55:26.212901Z","end":"2025-02-14T20:55:26.570524Z","steps":["trace[1717350127] 'range keys from in-memory index tree'  (duration: 357.392687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:26.570576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:55:26.212887Z","time spent":"357.676032ms","remote":"127.0.0.1:52086","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":195,"request content":"key:\"/registry/serviceaccounts/default/default\" limit:1 "}
	{"level":"warn","ts":"2025-02-14T20:55:26.570587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"389.662584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-02-14T20:55:26.570643Z","caller":"traceutil/trace.go:171","msg":"trace[800909980] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:772; }","duration":"389.740186ms","start":"2025-02-14T20:55:26.180894Z","end":"2025-02-14T20:55:26.570634Z","steps":["trace[800909980] 'range keys from in-memory index tree'  (duration: 389.54651ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:26.570664Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:55:26.180877Z","time spent":"389.780101ms","remote":"127.0.0.1:52056","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1141,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"info","ts":"2025-02-14T20:55:31.053340Z","caller":"traceutil/trace.go:171","msg":"trace[813398558] linearizableReadLoop","detail":"{readStateIndex:865; appliedIndex:864; }","duration":"318.090092ms","start":"2025-02-14T20:55:30.735239Z","end":"2025-02-14T20:55:31.053329Z","steps":["trace[813398558] 'read index received'  (duration: 317.987826ms)","trace[813398558] 'applied index is now lower than readState.Index'  (duration: 101.933µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T20:55:31.053506Z","caller":"traceutil/trace.go:171","msg":"trace[276974474] transaction","detail":"{read_only:false; response_revision:788; number_of_response:1; }","duration":"455.29487ms","start":"2025-02-14T20:55:30.598202Z","end":"2025-02-14T20:55:31.053497Z","steps":["trace[276974474] 'process raft request'  (duration: 455.05748ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:31.053972Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:55:30.598187Z","time spent":"455.359814ms","remote":"127.0.0.1:52056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:782 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-02-14T20:55:31.054163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.919425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T20:55:31.054185Z","caller":"traceutil/trace.go:171","msg":"trace[1186607980] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:788; }","duration":"318.959044ms","start":"2025-02-14T20:55:30.735219Z","end":"2025-02-14T20:55:31.054178Z","steps":["trace[1186607980] 'agreement among raft nodes before linearized reading'  (duration: 318.919458ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:31.054202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:55:30.735204Z","time spent":"318.995103ms","remote":"127.0.0.1:52068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-02-14T20:55:37.212619Z","caller":"traceutil/trace.go:171","msg":"trace[392853894] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"137.343831ms","start":"2025-02-14T20:55:37.075261Z","end":"2025-02-14T20:55:37.212605Z","steps":["trace[392853894] 'process raft request'  (duration: 137.237362ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:55:43.077815Z","caller":"traceutil/trace.go:171","msg":"trace[34508048] transaction","detail":"{read_only:false; response_revision:834; number_of_response:1; }","duration":"260.802613ms","start":"2025-02-14T20:55:42.816999Z","end":"2025-02-14T20:55:43.077802Z","steps":["trace[34508048] 'process raft request'  (duration: 260.564935ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:55:45.739310Z","caller":"traceutil/trace.go:171","msg":"trace[1007307632] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"497.614247ms","start":"2025-02-14T20:55:45.241684Z","end":"2025-02-14T20:55:45.739298Z","steps":["trace[1007307632] 'process raft request'  (duration: 497.539043ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:45.739399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-14T20:55:45.241669Z","time spent":"497.680858ms","remote":"127.0.0.1:52056","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:837 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-02-14T20:55:47.948968Z","caller":"traceutil/trace.go:171","msg":"trace[70711532] linearizableReadLoop","detail":"{readStateIndex:919; appliedIndex:918; }","duration":"170.148288ms","start":"2025-02-14T20:55:47.778744Z","end":"2025-02-14T20:55:47.948892Z","steps":["trace[70711532] 'read index received'  (duration: 169.835991ms)","trace[70711532] 'applied index is now lower than readState.Index'  (duration: 311.622µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-14T20:55:47.949324Z","caller":"traceutil/trace.go:171","msg":"trace[2121039679] transaction","detail":"{read_only:false; response_revision:839; number_of_response:1; }","duration":"201.562449ms","start":"2025-02-14T20:55:47.747748Z","end":"2025-02-14T20:55:47.949310Z","steps":["trace[2121039679] 'process raft request'  (duration: 200.920144ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-14T20:55:47.949379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.624888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T20:55:47.950509Z","caller":"traceutil/trace.go:171","msg":"trace[1895343944] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:839; }","duration":"171.775298ms","start":"2025-02-14T20:55:47.778723Z","end":"2025-02-14T20:55:47.950498Z","steps":["trace[1895343944] 'agreement among raft nodes before linearized reading'  (duration: 170.624162ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:55:49.430524Z","caller":"traceutil/trace.go:171","msg":"trace[1137723434] linearizableReadLoop","detail":"{readStateIndex:924; appliedIndex:923; }","duration":"184.088971ms","start":"2025-02-14T20:55:49.246424Z","end":"2025-02-14T20:55:49.430513Z","steps":["trace[1137723434] 'read index received'  (duration: 183.98903ms)","trace[1137723434] 'applied index is now lower than readState.Index'  (duration: 99.469µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-14T20:55:49.430601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.164647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-14T20:55:49.430617Z","caller":"traceutil/trace.go:171","msg":"trace[10188294] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:843; }","duration":"184.204114ms","start":"2025-02-14T20:55:49.246408Z","end":"2025-02-14T20:55:49.430612Z","steps":["trace[10188294] 'agreement among raft nodes before linearized reading'  (duration: 184.152957ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-14T20:55:49.430808Z","caller":"traceutil/trace.go:171","msg":"trace[1974483270] transaction","detail":"{read_only:false; response_revision:843; number_of_response:1; }","duration":"202.383479ms","start":"2025-02-14T20:55:49.228418Z","end":"2025-02-14T20:55:49.430801Z","steps":["trace[1974483270] 'process raft request'  (duration: 202.029942ms)"],"step_count":1}
	
	
	==> etcd [aa9fc593724cb25843c253b472359eadf6e8cdd39c3c4276561a205792108a48] <==
	{"level":"info","ts":"2025-02-14T20:54:07.280644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-14T20:54:07.280662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2025-02-14T20:54:07.280672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 3"}
	{"level":"info","ts":"2025-02-14T20:54:07.280678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2025-02-14T20:54:07.280685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 3"}
	{"level":"info","ts":"2025-02-14T20:54:07.280692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2025-02-14T20:54:07.285780Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:functional-471578 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T20:54:07.285829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T20:54:07.286280Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T20:54:07.286753Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T20:54:07.287350Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.172:2379"}
	{"level":"info","ts":"2025-02-14T20:54:07.287990Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T20:54:07.288539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T20:54:07.289692Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T20:54:07.289725Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T20:54:31.499387Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-14T20:54:31.499520Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-471578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"]}
	{"level":"warn","ts":"2025-02-14T20:54:31.499611Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T20:54:31.499742Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T20:54:31.546794Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.172:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-14T20:54:31.546837Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.172:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-14T20:54:31.547199Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bbf1bb039b0d3451","current-leader-member-id":"bbf1bb039b0d3451"}
	{"level":"info","ts":"2025-02-14T20:54:31.550099Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2025-02-14T20:54:31.550292Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2025-02-14T20:54:31.550322Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-471578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"]}
	
	
	==> kernel <==
	 20:58:35 up 6 min,  0 users,  load average: 0.18, 0.49, 0.26
	Linux functional-471578 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d41d965366fad4a2eae97d1b2c5ace064fced44262849fdc2e4b4338c318fef] <==
	I0214 20:54:51.921402       1 aggregator.go:171] initial CRD sync complete...
	I0214 20:54:51.921433       1 autoregister_controller.go:144] Starting autoregister controller
	I0214 20:54:51.921439       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 20:54:51.921444       1 cache.go:39] Caches are synced for autoregister controller
	E0214 20:54:51.928159       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 20:54:51.976676       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 20:54:52.160754       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0214 20:54:52.812113       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 20:54:53.327751       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0214 20:54:53.357164       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0214 20:54:53.379782       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 20:54:53.385100       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 20:54:55.104970       1 controller.go:615] quota admission added evaluator for: endpoints
	I0214 20:54:55.352420       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 20:54:55.403549       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0214 20:55:09.209911       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.172.51"}
	I0214 20:55:13.530490       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.61.50"}
	I0214 20:55:17.822777       1 controller.go:615] quota admission added evaluator for: namespaces
	I0214 20:55:18.092813       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.69.83"}
	I0214 20:55:18.126387       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.153.38"}
	I0214 20:55:26.026411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.218.181"}
	E0214 20:55:32.849975       1 conn.go:339] Error on socket receive: read tcp 192.168.39.172:8441->192.168.39.1:37746: use of closed network connection
	I0214 20:55:35.947592       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.109.225"}
	E0214 20:55:56.138979       1 conn.go:339] Error on socket receive: read tcp 192.168.39.172:8441->192.168.39.1:50744: use of closed network connection
	E0214 20:55:57.354427       1 conn.go:339] Error on socket receive: read tcp 192.168.39.172:8441->192.168.39.1:50760: use of closed network connection
	
	
	==> kube-controller-manager [2108ec1056a657dfb698f9c131cdf1d6c6782572019a77276b18d11be574c241] <==
	I0214 20:54:11.676623       1 shared_informer.go:320] Caches are synced for ephemeral
	I0214 20:54:11.677103       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0214 20:54:11.677161       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0214 20:54:11.677191       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0214 20:54:11.677199       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0214 20:54:11.677230       1 shared_informer.go:320] Caches are synced for crt configmap
	I0214 20:54:11.677447       1 shared_informer.go:320] Caches are synced for PV protection
	I0214 20:54:11.678954       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0214 20:54:11.679448       1 shared_informer.go:320] Caches are synced for stateful set
	I0214 20:54:11.681333       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 20:54:11.682145       1 shared_informer.go:320] Caches are synced for HPA
	I0214 20:54:11.683603       1 shared_informer.go:320] Caches are synced for GC
	I0214 20:54:11.685895       1 shared_informer.go:320] Caches are synced for deployment
	I0214 20:54:11.700901       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 20:54:11.700988       1 shared_informer.go:320] Caches are synced for disruption
	I0214 20:54:11.705994       1 shared_informer.go:320] Caches are synced for taint
	I0214 20:54:11.706133       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0214 20:54:11.706378       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-471578"
	I0214 20:54:11.706479       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0214 20:54:11.721613       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 20:54:11.721718       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0214 20:54:11.721742       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0214 20:54:12.086541       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="235.163857ms"
	I0214 20:54:12.095815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.212231ms"
	I0214 20:54:12.096246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="106.39µs"
	
	
	==> kube-controller-manager [36a4fa6e91fa06e0f374243bc33b72fa46a5b15a1ca1a075b8f0b21f025d8134] <==
	I0214 20:55:18.031656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="179.426µs"
	I0214 20:55:18.068957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="39.324968ms"
	I0214 20:55:18.069093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="81.77µs"
	I0214 20:55:18.083159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="32.744µs"
	I0214 20:55:18.463269       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="13.102695ms"
	I0214 20:55:18.463535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="26.603µs"
	I0214 20:55:22.378607       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-471578"
	I0214 20:55:22.526785       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="17.305183ms"
	I0214 20:55:22.526836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="28.501µs"
	I0214 20:55:25.972895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="69.186108ms"
	I0214 20:55:25.998319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="25.393077ms"
	I0214 20:55:25.998406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="40.252µs"
	I0214 20:55:27.603718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="44.785µs"
	I0214 20:55:28.611701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="10.298873ms"
	I0214 20:55:28.611968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="89.122µs"
	I0214 20:55:32.638750       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="7.940758ms"
	I0214 20:55:32.638944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="43.913µs"
	I0214 20:55:36.031952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="23.774672ms"
	I0214 20:55:36.047819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="15.641518ms"
	I0214 20:55:36.048370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="102.822µs"
	I0214 20:55:36.067635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="40.602µs"
	I0214 20:55:49.450246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="13.751113ms"
	I0214 20:55:49.450373       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="34.211µs"
	I0214 20:55:52.667413       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-471578"
	I0214 20:56:23.559331       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-471578"
	
	
	==> kube-proxy [994dd91a124ad714ede1b4c05c50ddde3d7ea84ba27cc1a1ecc12405487fe398] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0214 20:54:09.891536       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0214 20:54:09.901458       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0214 20:54:09.901545       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 20:54:09.948625       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0214 20:54:09.948667       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0214 20:54:09.948687       1 server_linux.go:170] "Using iptables Proxier"
	I0214 20:54:09.953179       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 20:54:09.953376       1 server.go:497] "Version info" version="v1.32.1"
	I0214 20:54:09.953404       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 20:54:09.955119       1 config.go:199] "Starting service config controller"
	I0214 20:54:09.955132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 20:54:09.955150       1 config.go:105] "Starting endpoint slice config controller"
	I0214 20:54:09.955153       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 20:54:09.955453       1 config.go:329] "Starting node config controller"
	I0214 20:54:09.955459       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 20:54:10.055532       1 shared_informer.go:320] Caches are synced for node config
	I0214 20:54:10.055574       1 shared_informer.go:320] Caches are synced for service config
	I0214 20:54:10.055583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [cc81dbd56c4ec7ecffc54a4fcbae34e9a41edeadf370a056c5efc905039a16e4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0214 20:54:52.810256       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0214 20:54:52.823772       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.172"]
	E0214 20:54:52.823948       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 20:54:52.993411       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0214 20:54:52.993439       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0214 20:54:52.993459       1 server_linux.go:170] "Using iptables Proxier"
	I0214 20:54:52.995948       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 20:54:52.996382       1 server.go:497] "Version info" version="v1.32.1"
	I0214 20:54:52.996518       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 20:54:52.997762       1 config.go:199] "Starting service config controller"
	I0214 20:54:52.997851       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 20:54:52.997894       1 config.go:105] "Starting endpoint slice config controller"
	I0214 20:54:52.997911       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 20:54:52.998449       1 config.go:329] "Starting node config controller"
	I0214 20:54:52.998483       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 20:54:53.098654       1 shared_informer.go:320] Caches are synced for node config
	I0214 20:54:53.098787       1 shared_informer.go:320] Caches are synced for service config
	I0214 20:54:53.098799       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [da7bd48c11c9eb659ecfc8997021cbf82bab18dafb8e001e55683715c02c0513] <==
	I0214 20:54:06.383135       1 serving.go:386] Generated self-signed cert in-memory
	W0214 20:54:08.439887       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 20:54:08.439938       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 20:54:08.439949       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 20:54:08.439955       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 20:54:08.493447       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 20:54:08.493494       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 20:54:08.495839       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 20:54:08.495967       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 20:54:08.495910       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 20:54:08.495969       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 20:54:08.596463       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 20:54:31.493629       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0214 20:54:31.493700       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0214 20:54:31.493821       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e533c683e67f628887483aa7b0805f7ce209dd173c5803d3813f1943c8f7d50b] <==
	I0214 20:54:50.021725       1 serving.go:386] Generated self-signed cert in-memory
	W0214 20:54:51.835479       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 20:54:51.835560       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 20:54:51.835584       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 20:54:51.835603       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 20:54:51.906426       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 20:54:51.906462       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 20:54:51.911492       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 20:54:51.912357       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 20:54:51.912376       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 20:54:51.912396       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 20:54:52.013877       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 20:57:28 functional-471578 kubelet[4965]: E0214 20:57:28.287399    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566648287085632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:28 functional-471578 kubelet[4965]: E0214 20:57:28.287423    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566648287085632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:38 functional-471578 kubelet[4965]: E0214 20:57:38.289590    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566658288821429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:38 functional-471578 kubelet[4965]: E0214 20:57:38.289613    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566658288821429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.150584    4965 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 14 20:57:48 functional-471578 kubelet[4965]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 14 20:57:48 functional-471578 kubelet[4965]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 14 20:57:48 functional-471578 kubelet[4965]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 14 20:57:48 functional-471578 kubelet[4965]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.192400    4965 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1958b59b654725d1613da743048bc13e/crio-216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e: Error finding container 216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e: Status 404 returned error can't find the container with id 216fa0e455c73e1a3c5972034863792aad56c432ccf46281fc64da6aef84df5e
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.192832    4965 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda42ff0becb25fc702e0745d4cc25fdbd/crio-aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927: Error finding container aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927: Status 404 returned error can't find the container with id aa3f609bf847d49cf91661d68f71eceeb1ab2f05cdde349bcc5e9357fa83c927
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.193283    4965 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod0bddb02e-1c49-4cbc-ac4d-bd8db4393502/crio-384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2: Error finding container 384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2: Status 404 returned error can't find the container with id 384998fd91b0f7260cc844c3087deaeba2c7f6403462f4f49edaf8934fe7a2e2
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.193496    4965 manager.go:1116] Failed to create existing container: /kubepods/burstable/podc32cdd61-218a-4b2c-a211-e394ec28e134/crio-ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac: Error finding container ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac: Status 404 returned error can't find the container with id ae680346afd20f97887c48d8e0d6e6a39db87479a43f775573132d0d96ba59ac
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.193641    4965 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod03875c89b3a9dfb8a5a672ab1d6ca8d7/crio-3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3: Error finding container 3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3: Status 404 returned error can't find the container with id 3c83f9a34ced4dba46062230a3ba8528dd7e678f3746ac647df89644fe143cb3
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.193870    4965 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod06c6827b-b00d-4b5e-abc7-cf2bab7309bf/crio-80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a: Error finding container 80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a: Status 404 returned error can't find the container with id 80eb32d389aeded8a0eb406b43fc75e9bd5abb166425d21cbde694eb5661173a
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.291616    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566668291261883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:48 functional-471578 kubelet[4965]: E0214 20:57:48.291753    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566668291261883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:58 functional-471578 kubelet[4965]: E0214 20:57:58.295151    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566678294814811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:57:58 functional-471578 kubelet[4965]: E0214 20:57:58.295192    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566678294814811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:08 functional-471578 kubelet[4965]: E0214 20:58:08.297075    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566688296571999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:08 functional-471578 kubelet[4965]: E0214 20:58:08.297383    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566688296571999,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:18 functional-471578 kubelet[4965]: E0214 20:58:18.299785    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566698299140772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:18 functional-471578 kubelet[4965]: E0214 20:58:18.299868    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566698299140772,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:28 functional-471578 kubelet[4965]: E0214 20:58:28.302356    4965 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566708302000903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 20:58:28 functional-471578 kubelet[4965]: E0214 20:58:28.302816    4965 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739566708302000903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:281110,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [8c58791c7d9f1eccf5d18b719bb58116527d9827ab747da2926ca2f67568518c] <==
	2025/02/14 20:55:32 Starting overwatch
	2025/02/14 20:55:32 Using namespace: kubernetes-dashboard
	2025/02/14 20:55:32 Using in-cluster config to connect to apiserver
	2025/02/14 20:55:32 Using secret token for csrf signing
	2025/02/14 20:55:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/14 20:55:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/14 20:55:32 Successful initial request to the apiserver, version: v1.32.1
	2025/02/14 20:55:32 Generating JWE encryption key
	2025/02/14 20:55:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/14 20:55:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/14 20:55:32 Initializing JWE encryption key from synchronized object
	2025/02/14 20:55:32 Creating in-cluster Sidecar client
	2025/02/14 20:55:32 Successful request to sidecar
	2025/02/14 20:55:32 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [6897bec9e0153f069d5189fa970587df58277ba4d5eef345a1a504996e31413b] <==
	I0214 20:54:52.674776       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 20:54:52.692543       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 20:54:52.692649       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 20:55:10.089985       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 20:55:10.090291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-471578_f00883c6-cb80-4b56-b949-cae44c184690!
	I0214 20:55:10.091177       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6b503de-3606-4988-9854-9f2dbe0be89c", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-471578_f00883c6-cb80-4b56-b949-cae44c184690 became leader
	I0214 20:55:10.191095       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-471578_f00883c6-cb80-4b56-b949-cae44c184690!
	I0214 20:55:19.543226       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0214 20:55:19.544246       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"37157e8a-a2bd-40fa-a086-0425d6f650a5", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0214 20:55:19.543418       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f7d387e9-c3a6-4891-b849-41406cfd5a78 333 0 2025-02-14 20:53:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-14 20:53:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-37157e8a-a2bd-40fa-a086-0425d6f650a5 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  37157e8a-a2bd-40fa-a086-0425d6f650a5 721 0 2025-02-14 20:55:19 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-14 20:55:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-14 20:55:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0214 20:55:19.547217       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-37157e8a-a2bd-40fa-a086-0425d6f650a5" provisioned
	I0214 20:55:19.547282       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0214 20:55:19.547306       1 volume_store.go:212] Trying to save persistentvolume "pvc-37157e8a-a2bd-40fa-a086-0425d6f650a5"
	I0214 20:55:19.557360       1 volume_store.go:219] persistentvolume "pvc-37157e8a-a2bd-40fa-a086-0425d6f650a5" saved
	I0214 20:55:19.557644       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"37157e8a-a2bd-40fa-a086-0425d6f650a5", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-37157e8a-a2bd-40fa-a086-0425d6f650a5
	
	
	==> storage-provisioner [af2286f9d71df7def774c10528d4cd36543fc71b90be5b94168ea01a057f2c38] <==
	I0214 20:54:09.786788       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 20:54:09.810681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 20:54:09.810908       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 20:54:27.211327       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 20:54:27.211581       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-471578_121af233-c17e-496d-b266-1dadde87e334!
	I0214 20:54:27.211749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6b503de-3606-4988-9854-9f2dbe0be89c", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-471578_121af233-c17e-496d-b266-1dadde87e334 became leader
	I0214 20:54:27.312910       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-471578_121af233-c17e-496d-b266-1dadde87e334!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-471578 -n functional-471578
helpers_test.go:261: (dbg) Run:  kubectl --context functional-471578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-471578 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-471578 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-471578/192.168.39.172
	Start Time:       Fri, 14 Feb 2025 20:55:16 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  cri-o://6d6787fad19b80b5fe428eb2b51df99dfe33e04b50553896add4d20d3bcb68e7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 14 Feb 2025 20:55:19 +0000
	      Finished:     Fri, 14 Feb 2025 20:55:20 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rzptz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rzptz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m19s  default-scheduler  Successfully assigned default/busybox-mount to functional-471578
	  Normal  Pulling    3m19s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m17s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.028s (2.028s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m17s  kubelet            Created container: mount-munger
	  Normal  Started    3m17s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-471578/192.168.39.172
	Start Time:       Fri, 14 Feb 2025 20:55:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vh8pc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vh8pc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m2s  default-scheduler  Successfully assigned default/sp-pod to functional-471578

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (202.80s)
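For context, the storage-provisioner log above shows the 500Mi claim "myclaim" provisioning successfully, while "sp-pod" (which mounts it at /tmp/mount) never leaves ContainerCreating, so the test's readiness wait times out. Below is a minimal, hypothetical sketch of that kind of check using client-go; it is not the actual functional_test.go code, and the kubeconfig path is illustrative only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the CI run points at its minikube-integration profile instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until sp-pod reports the Ready condition, or give up after the timeout,
	// which is roughly what happens in the failing run above.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pod, getErr := cs.CoreV1().Pods("default").Get(context.TODO(), "sp-pod", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // ignore transient errors and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("sp-pod never became Ready:", err)
	}
}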

                                                
                                    
TestPreload (279.33s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-497787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-497787 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m6.300149323s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-497787 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-497787 image pull gcr.io/k8s-minikube/busybox: (2.159304221s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-497787
E0214 21:40:13.382186  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:41:25.370461  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-497787: (1m30.623266639s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-497787 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0214 21:41:42.292982  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-497787 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.448666528s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-497787 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-14 21:42:36.457001669 +0000 UTC m=+3502.161570615
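The expectation that fails at preload_test.go:76 amounts to: after restarting the cluster with preloads enabled, the busybox image pulled earlier should still appear in `minikube image list`. A minimal sketch of that assertion, assuming only the commands already shown in this log (not the actual test source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the log shows for "image list" on the test-preload-497787 profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-497787", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The run above fails here: only the preloaded v1.24.4 images are listed,
	// and the previously pulled busybox image is missing.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Printf("expected gcr.io/k8s-minikube/busybox in image list output, instead got:\n%s", out)
	}
}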
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-497787 -n test-preload-497787
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-497787 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-497787 logs -n 25: (1.036180636s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-806010 ssh -n                                                                 | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|         | multinode-806010-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-806010 ssh -n multinode-806010 sudo cat                                       | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|         | /home/docker/cp-test_multinode-806010-m03_multinode-806010.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-806010 cp multinode-806010-m03:/home/docker/cp-test.txt                       | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|         | multinode-806010-m02:/home/docker/cp-test_multinode-806010-m03_multinode-806010-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-806010 ssh -n                                                                 | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|         | multinode-806010-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-806010 ssh -n multinode-806010-m02 sudo cat                                   | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	|         | /home/docker/cp-test_multinode-806010-m03_multinode-806010-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-806010 node stop m03                                                          | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:26 UTC |
	| node    | multinode-806010 node start                                                             | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:26 UTC | 14 Feb 25 21:27 UTC |
	|         | m03 -v=5 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-806010                                                                | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC |                     |
	| stop    | -p multinode-806010                                                                     | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:27 UTC | 14 Feb 25 21:30 UTC |
	| start   | -p multinode-806010                                                                     | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:30 UTC | 14 Feb 25 21:32 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-806010                                                                | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:32 UTC |                     |
	| node    | multinode-806010 node delete                                                            | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:32 UTC | 14 Feb 25 21:32 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-806010 stop                                                                   | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:32 UTC | 14 Feb 25 21:35 UTC |
	| start   | -p multinode-806010                                                                     | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:35 UTC | 14 Feb 25 21:37 UTC |
	|         | --wait=true -v=5                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-806010                                                                | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC |                     |
	| start   | -p multinode-806010-m02                                                                 | multinode-806010-m02 | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-806010-m03                                                                 | multinode-806010-m03 | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC | 14 Feb 25 21:37 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-806010                                                                 | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC |                     |
	| delete  | -p multinode-806010-m03                                                                 | multinode-806010-m03 | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC | 14 Feb 25 21:37 UTC |
	| delete  | -p multinode-806010                                                                     | multinode-806010     | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC | 14 Feb 25 21:37 UTC |
	| start   | -p test-preload-497787                                                                  | test-preload-497787  | jenkins | v1.35.0 | 14 Feb 25 21:37 UTC | 14 Feb 25 21:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-497787 image pull                                                          | test-preload-497787  | jenkins | v1.35.0 | 14 Feb 25 21:40 UTC | 14 Feb 25 21:40 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-497787                                                                  | test-preload-497787  | jenkins | v1.35.0 | 14 Feb 25 21:40 UTC | 14 Feb 25 21:41 UTC |
	| start   | -p test-preload-497787                                                                  | test-preload-497787  | jenkins | v1.35.0 | 14 Feb 25 21:41 UTC | 14 Feb 25 21:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-497787 image list                                                          | test-preload-497787  | jenkins | v1.35.0 | 14 Feb 25 21:42 UTC | 14 Feb 25 21:42 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:41:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:41:38.823744  281862 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:41:38.823853  281862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:41:38.823862  281862 out.go:358] Setting ErrFile to fd 2...
	I0214 21:41:38.823866  281862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:41:38.824049  281862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:41:38.824529  281862 out.go:352] Setting JSON to false
	I0214 21:41:38.825373  281862 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8643,"bootTime":1739560656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:41:38.825472  281862 start.go:140] virtualization: kvm guest
	I0214 21:41:38.827206  281862 out.go:177] * [test-preload-497787] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:41:38.828526  281862 notify.go:220] Checking for updates...
	I0214 21:41:38.828541  281862 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:41:38.829595  281862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:41:38.830658  281862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:41:38.831632  281862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:41:38.832572  281862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:41:38.833445  281862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:41:38.834718  281862 config.go:182] Loaded profile config "test-preload-497787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0214 21:41:38.835114  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:41:38.835156  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:41:38.850281  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
	I0214 21:41:38.850730  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:41:38.851250  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:41:38.851276  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:41:38.851606  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:41:38.851799  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:41:38.853145  281862 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0214 21:41:38.854190  281862 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:41:38.854524  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:41:38.854584  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:41:38.868614  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36193
	I0214 21:41:38.868932  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:41:38.869351  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:41:38.869379  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:41:38.869677  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:41:38.869863  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:41:38.901358  281862 out.go:177] * Using the kvm2 driver based on existing profile
	I0214 21:41:38.902414  281862 start.go:304] selected driver: kvm2
	I0214 21:41:38.902428  281862 start.go:908] validating driver "kvm2" against &{Name:test-preload-497787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 Cluster
Name:test-preload-497787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:41:38.902533  281862 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:41:38.903209  281862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:41:38.903278  281862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:41:38.917047  281862 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:41:38.917390  281862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:41:38.917420  281862 cni.go:84] Creating CNI manager for ""
	I0214 21:41:38.917472  281862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:41:38.917537  281862 start.go:347] cluster config:
	{Name:test-preload-497787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-497787 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:41:38.917634  281862 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:41:38.919486  281862 out.go:177] * Starting "test-preload-497787" primary control-plane node in "test-preload-497787" cluster
	I0214 21:41:38.920405  281862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0214 21:41:38.950574  281862 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0214 21:41:38.950593  281862 cache.go:56] Caching tarball of preloaded images
	I0214 21:41:38.950729  281862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0214 21:41:38.951860  281862 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0214 21:41:38.952797  281862 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0214 21:41:38.982485  281862 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0214 21:41:42.498481  281862 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0214 21:41:42.498568  281862 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0214 21:41:43.361965  281862 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0214 21:41:43.362111  281862 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/config.json ...
	I0214 21:41:43.362356  281862 start.go:360] acquireMachinesLock for test-preload-497787: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:41:43.362432  281862 start.go:364] duration metric: took 51.282µs to acquireMachinesLock for "test-preload-497787"
	I0214 21:41:43.362451  281862 start.go:96] Skipping create...Using existing machine configuration
	I0214 21:41:43.362457  281862 fix.go:54] fixHost starting: 
	I0214 21:41:43.362768  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:41:43.362810  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:41:43.377720  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42753
	I0214 21:41:43.378169  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:41:43.378597  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:41:43.378634  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:41:43.378953  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:41:43.379143  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:41:43.379317  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetState
	I0214 21:41:43.380583  281862 fix.go:112] recreateIfNeeded on test-preload-497787: state=Stopped err=<nil>
	I0214 21:41:43.380603  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	W0214 21:41:43.380751  281862 fix.go:138] unexpected machine state, will restart: <nil>
	I0214 21:41:43.383215  281862 out.go:177] * Restarting existing kvm2 VM for "test-preload-497787" ...
	I0214 21:41:43.384459  281862 main.go:141] libmachine: (test-preload-497787) Calling .Start
	I0214 21:41:43.384627  281862 main.go:141] libmachine: (test-preload-497787) starting domain...
	I0214 21:41:43.384640  281862 main.go:141] libmachine: (test-preload-497787) ensuring networks are active...
	I0214 21:41:43.385318  281862 main.go:141] libmachine: (test-preload-497787) Ensuring network default is active
	I0214 21:41:43.385591  281862 main.go:141] libmachine: (test-preload-497787) Ensuring network mk-test-preload-497787 is active
	I0214 21:41:43.385864  281862 main.go:141] libmachine: (test-preload-497787) getting domain XML...
	I0214 21:41:43.386423  281862 main.go:141] libmachine: (test-preload-497787) creating domain...
	I0214 21:41:43.694588  281862 main.go:141] libmachine: (test-preload-497787) waiting for IP...
	I0214 21:41:43.695814  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:43.696199  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:43.696301  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:43.696192  281913 retry.go:31] will retry after 252.13363ms: waiting for domain to come up
	I0214 21:41:43.949490  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:43.949842  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:43.949873  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:43.949808  281913 retry.go:31] will retry after 317.098878ms: waiting for domain to come up
	I0214 21:41:44.268055  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:44.268466  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:44.268496  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:44.268418  281913 retry.go:31] will retry after 455.464576ms: waiting for domain to come up
	I0214 21:41:44.725085  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:44.725444  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:44.725491  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:44.725428  281913 retry.go:31] will retry after 534.000731ms: waiting for domain to come up
	I0214 21:41:45.260840  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:45.261119  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:45.261144  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:45.261086  281913 retry.go:31] will retry after 645.018728ms: waiting for domain to come up
	I0214 21:41:45.907933  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:45.908338  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:45.908368  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:45.908294  281913 retry.go:31] will retry after 806.560208ms: waiting for domain to come up
	I0214 21:41:46.716773  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:46.717194  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:46.717222  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:46.717156  281913 retry.go:31] will retry after 801.798298ms: waiting for domain to come up
	I0214 21:41:47.520716  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:47.521047  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:47.521145  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:47.521013  281913 retry.go:31] will retry after 903.369694ms: waiting for domain to come up
	I0214 21:41:48.426340  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:48.426805  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:48.426838  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:48.426761  281913 retry.go:31] will retry after 1.817303036s: waiting for domain to come up
	I0214 21:41:50.246797  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:50.247146  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:50.247175  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:50.247100  281913 retry.go:31] will retry after 1.664123942s: waiting for domain to come up
	I0214 21:41:51.914020  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:51.914504  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:51.914558  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:51.914486  281913 retry.go:31] will retry after 2.073970893s: waiting for domain to come up
	I0214 21:41:53.989919  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:53.990388  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:53.990416  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:53.990342  281913 retry.go:31] will retry after 3.496769045s: waiting for domain to come up
	I0214 21:41:57.490998  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:41:57.491412  281862 main.go:141] libmachine: (test-preload-497787) DBG | unable to find current IP address of domain test-preload-497787 in network mk-test-preload-497787
	I0214 21:41:57.491514  281862 main.go:141] libmachine: (test-preload-497787) DBG | I0214 21:41:57.491418  281913 retry.go:31] will retry after 3.293529645s: waiting for domain to come up
	I0214 21:42:00.788257  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.788719  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has current primary IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.788746  281862 main.go:141] libmachine: (test-preload-497787) found domain IP: 192.168.39.128
	I0214 21:42:00.788788  281862 main.go:141] libmachine: (test-preload-497787) reserving static IP address...
	I0214 21:42:00.789159  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "test-preload-497787", mac: "52:54:00:9b:ac:c9", ip: "192.168.39.128"} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:00.789177  281862 main.go:141] libmachine: (test-preload-497787) DBG | skip adding static IP to network mk-test-preload-497787 - found existing host DHCP lease matching {name: "test-preload-497787", mac: "52:54:00:9b:ac:c9", ip: "192.168.39.128"}
	I0214 21:42:00.789193  281862 main.go:141] libmachine: (test-preload-497787) reserved static IP address 192.168.39.128 for domain test-preload-497787
	I0214 21:42:00.789215  281862 main.go:141] libmachine: (test-preload-497787) waiting for SSH...
	I0214 21:42:00.789226  281862 main.go:141] libmachine: (test-preload-497787) DBG | Getting to WaitForSSH function...
	I0214 21:42:00.791231  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.791490  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:00.791513  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.791639  281862 main.go:141] libmachine: (test-preload-497787) DBG | Using SSH client type: external
	I0214 21:42:00.791659  281862 main.go:141] libmachine: (test-preload-497787) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa (-rw-------)
	I0214 21:42:00.791744  281862 main.go:141] libmachine: (test-preload-497787) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:42:00.791773  281862 main.go:141] libmachine: (test-preload-497787) DBG | About to run SSH command:
	I0214 21:42:00.791791  281862 main.go:141] libmachine: (test-preload-497787) DBG | exit 0
	I0214 21:42:00.918100  281862 main.go:141] libmachine: (test-preload-497787) DBG | SSH cmd err, output: <nil>: 
	I0214 21:42:00.918429  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetConfigRaw
	I0214 21:42:00.919044  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetIP
	I0214 21:42:00.921211  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.921630  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:00.921660  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.921974  281862 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/config.json ...
	I0214 21:42:00.922185  281862 machine.go:93] provisionDockerMachine start ...
	I0214 21:42:00.922205  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:00.922437  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:00.924711  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.925076  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:00.925094  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:00.925265  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:00.925438  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:00.925566  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:00.925689  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:00.925863  281862 main.go:141] libmachine: Using SSH client type: native
	I0214 21:42:00.926067  281862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0214 21:42:00.926078  281862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:42:01.034612  281862 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0214 21:42:01.034649  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetMachineName
	I0214 21:42:01.034827  281862 buildroot.go:166] provisioning hostname "test-preload-497787"
	I0214 21:42:01.034852  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetMachineName
	I0214 21:42:01.035021  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:01.037040  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.037354  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.037380  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.037480  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:01.037624  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.037770  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.037947  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:01.038098  281862 main.go:141] libmachine: Using SSH client type: native
	I0214 21:42:01.038261  281862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0214 21:42:01.038273  281862 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-497787 && echo "test-preload-497787" | sudo tee /etc/hostname
	I0214 21:42:01.159897  281862 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-497787
	
	I0214 21:42:01.159920  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:01.162369  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.162692  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.162724  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.162834  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:01.162991  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.163113  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.163257  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:01.163424  281862 main.go:141] libmachine: Using SSH client type: native
	I0214 21:42:01.163586  281862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0214 21:42:01.163603  281862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-497787' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-497787/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-497787' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:42:01.278987  281862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:42:01.279012  281862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:42:01.279057  281862 buildroot.go:174] setting up certificates
	I0214 21:42:01.279073  281862 provision.go:84] configureAuth start
	I0214 21:42:01.279086  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetMachineName
	I0214 21:42:01.279281  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetIP
	I0214 21:42:01.281437  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.281799  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.281829  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.281992  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:01.284079  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.284396  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.284434  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.284532  281862 provision.go:143] copyHostCerts
	I0214 21:42:01.284600  281862 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:42:01.284616  281862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:42:01.284682  281862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:42:01.284785  281862 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:42:01.284795  281862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:42:01.284824  281862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:42:01.284896  281862 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:42:01.284904  281862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:42:01.284928  281862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:42:01.284984  281862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.test-preload-497787 san=[127.0.0.1 192.168.39.128 localhost minikube test-preload-497787]
	I0214 21:42:01.714059  281862 provision.go:177] copyRemoteCerts
	I0214 21:42:01.714126  281862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:42:01.714149  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:01.716767  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.717100  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.717133  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.717331  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:01.717496  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.717659  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:01.717787  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:01.800466  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:42:01.827133  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0214 21:42:01.853332  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:42:01.879070  281862 provision.go:87] duration metric: took 599.986865ms to configureAuth
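configureAuth regenerates a server certificate for the machine and pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. If that provisioned certificate ever needs to be inspected by hand, a sketch along these lines works (paths taken from the scp lines above; the commands themselves are not run by the test):

    # confirm the server cert chains to the provisioned CA and show its SANs
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'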
	I0214 21:42:01.879090  281862 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:42:01.879238  281862 config.go:182] Loaded profile config "test-preload-497787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0214 21:42:01.879314  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:01.881299  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.881598  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:01.881625  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:01.881787  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:01.881943  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.882098  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:01.882243  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:01.882419  281862 main.go:141] libmachine: Using SSH client type: native
	I0214 21:42:01.882558  281862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0214 21:42:01.882572  281862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:42:02.117154  281862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:42:02.117184  281862 machine.go:96] duration metric: took 1.194984526s to provisionDockerMachine
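The last provisioning step writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' to /etc/sysconfig/crio.minikube and restarts CRI-O, so the in-cluster service CIDR is treated as an insecure registry. A minimal, illustrative way to confirm the drop-in landed and the daemon came back up:

    cat /etc/sysconfig/crio.minikube   # should contain the CRIO_MINIKUBE_OPTIONS line above
    systemctl is-active crio           # "active" once the restart has finished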
	I0214 21:42:02.117200  281862 start.go:293] postStartSetup for "test-preload-497787" (driver="kvm2")
	I0214 21:42:02.117213  281862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:42:02.117236  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:02.117582  281862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:42:02.117615  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:02.120161  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.120454  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:02.120486  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.120636  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:02.120832  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:02.121011  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:02.121158  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:02.204440  281862 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:42:02.208821  281862 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:42:02.208838  281862 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:42:02.208886  281862 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:42:02.208977  281862 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:42:02.209076  281862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:42:02.218533  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:42:02.242038  281862 start.go:296] duration metric: took 124.826856ms for postStartSetup
	I0214 21:42:02.242074  281862 fix.go:56] duration metric: took 18.879616622s for fixHost
	I0214 21:42:02.242091  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:02.244371  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.244671  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:02.244692  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.244840  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:02.244977  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:02.245088  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:02.245229  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:02.245350  281862 main.go:141] libmachine: Using SSH client type: native
	I0214 21:42:02.245527  281862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0214 21:42:02.245539  281862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:42:02.354751  281862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569322.328378912
	
	I0214 21:42:02.354763  281862 fix.go:216] guest clock: 1739569322.328378912
	I0214 21:42:02.354770  281862 fix.go:229] Guest: 2025-02-14 21:42:02.328378912 +0000 UTC Remote: 2025-02-14 21:42:02.242079475 +0000 UTC m=+23.456360432 (delta=86.299437ms)
	I0214 21:42:02.354791  281862 fix.go:200] guest clock delta is within tolerance: 86.299437ms
	I0214 21:42:02.354801  281862 start.go:83] releasing machines lock for "test-preload-497787", held for 18.992357405s
	I0214 21:42:02.354829  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:02.355010  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetIP
	I0214 21:42:02.357148  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.357488  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:02.357519  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.357653  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:02.358080  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:02.358263  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:02.358378  281862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:42:02.358418  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:02.358475  281862 ssh_runner.go:195] Run: cat /version.json
	I0214 21:42:02.358501  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:02.360982  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.361314  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.361348  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:02.361368  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.361536  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:02.361680  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:02.361798  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:02.361819  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:02.361843  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:02.361933  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:02.361989  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:02.362135  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:02.362291  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:02.362420  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:02.465059  281862 ssh_runner.go:195] Run: systemctl --version
	I0214 21:42:02.470752  281862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:42:02.616375  281862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:42:02.623003  281862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:42:02.623065  281862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:42:02.638413  281862 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 21:42:02.638428  281862 start.go:495] detecting cgroup driver to use...
	I0214 21:42:02.638473  281862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:42:02.653606  281862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:42:02.666704  281862 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:42:02.666742  281862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:42:02.679783  281862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:42:02.693141  281862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:42:02.805670  281862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:42:02.943818  281862 docker.go:233] disabling docker service ...
	I0214 21:42:02.943878  281862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:42:02.957688  281862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:42:02.969996  281862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:42:03.105804  281862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:42:03.232045  281862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
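Because this profile uses the crio runtime, cri-docker and docker are stopped, disabled and masked before CRI-O is configured; masking is what keeps socket activation from bringing Docker back. A hedged sketch of how that state could be confirmed on the guest:

    systemctl is-enabled docker.service cri-docker.service   # expect "masked" for both
    systemctl is-active docker                                # expect "inactive"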
	I0214 21:42:03.245030  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:42:03.262153  281862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0214 21:42:03.262195  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.272387  281862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:42:03.272437  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.282719  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.292869  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.303010  281862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:42:03.313323  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.323588  281862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:42:03.339975  281862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
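Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the run expects. Assuming the stock file already contained pause_image and cgroup_manager lines for sed to match, the relevant keys end up roughly as follows (a sketch of the intended result, not a capture from the VM):

    pause_image = "registry.k8s.io/pause:3.7"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]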
	I0214 21:42:03.350243  281862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:42:03.359681  281862 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 21:42:03.359727  281862 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 21:42:03.372403  281862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:42:03.381638  281862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:42:03.494975  281862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:42:03.583174  281862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:42:03.583238  281862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:42:03.588109  281862 start.go:563] Will wait 60s for crictl version
	I0214 21:42:03.588161  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:03.591841  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:42:03.632026  281862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:42:03.632111  281862 ssh_runner.go:195] Run: crio --version
	I0214 21:42:03.668330  281862 ssh_runner.go:195] Run: crio --version
	I0214 21:42:03.695520  281862 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0214 21:42:03.696451  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetIP
	I0214 21:42:03.698964  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:03.699238  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:03.699260  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:03.699418  281862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0214 21:42:03.703413  281862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:42:03.715721  281862 kubeadm.go:875] updating cluster {Name:test-preload-497787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-prelo
ad-497787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:42:03.715821  281862 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0214 21:42:03.715861  281862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:42:03.750189  281862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0214 21:42:03.750236  281862 ssh_runner.go:195] Run: which lz4
	I0214 21:42:03.754009  281862 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 21:42:03.757843  281862 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 21:42:03.757865  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0214 21:42:05.254127  281862 crio.go:462] duration metric: took 1.500128019s to copy over tarball
	I0214 21:42:05.254206  281862 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 21:42:07.632794  281862 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.378558493s)
	I0214 21:42:07.632821  281862 crio.go:469] duration metric: took 2.378661839s to extract the tarball
	I0214 21:42:07.632829  281862 ssh_runner.go:146] rm: /preloaded.tar.lz4
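No preloaded images were found on the guest, so the ~459 MB preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 is copied over and unpacked into /var with extended attributes preserved. The unpack is exactly the command shown in the log and can be reproduced manually (illustrative sketch; the tarball path is whatever was scp'd over):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images    # lists whatever the tarball provided

Note that in this run the v1.24.4 images are still reported missing immediately afterwards, which is why the cached-image fallback below kicks in.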
	I0214 21:42:07.676428  281862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:42:07.726979  281862 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0214 21:42:07.726997  281862 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 21:42:07.727084  281862 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:07.727108  281862 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:07.727151  281862 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0214 21:42:07.727158  281862 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:07.727059  281862 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:42:07.727232  281862 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:07.727268  281862 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:07.727319  281862 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:07.728929  281862 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:07.728940  281862 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:07.728942  281862 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0214 21:42:07.728958  281862 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:07.728987  281862 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:42:07.728994  281862 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:07.729002  281862 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:07.729028  281862 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:07.885577  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:07.886994  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0214 21:42:07.891034  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:07.895453  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:07.906746  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:07.961489  281862 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0214 21:42:07.961541  281862 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:07.961590  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:07.974301  281862 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0214 21:42:07.974344  281862 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0214 21:42:07.974385  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.002235  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:08.023769  281862 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0214 21:42:08.023813  281862 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:08.023859  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.023876  281862 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0214 21:42:08.023855  281862 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0214 21:42:08.023911  281862 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:08.023925  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:08.023966  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.023931  281862 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:08.023983  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0214 21:42:08.024009  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.024798  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:08.055190  281862 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0214 21:42:08.055224  281862 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:08.055264  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.104158  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:08.104174  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:08.104240  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:08.104279  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:08.104367  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0214 21:42:08.136284  281862 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0214 21:42:08.136340  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:08.136331  281862 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:08.136400  281862 ssh_runner.go:195] Run: which crictl
	I0214 21:42:08.199098  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0214 21:42:08.233406  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0214 21:42:08.242153  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:08.242232  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:08.242279  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:08.289510  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:08.291134  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0214 21:42:08.291233  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0214 21:42:08.291259  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:08.326319  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0214 21:42:08.326461  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0214 21:42:08.360476  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0214 21:42:08.360532  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0214 21:42:08.360562  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0214 21:42:08.395748  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:08.410971  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0214 21:42:08.410980  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0214 21:42:08.411048  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0214 21:42:08.411051  281862 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0214 21:42:08.411121  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0214 21:42:08.469488  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0214 21:42:08.469616  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0214 21:42:08.472642  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0214 21:42:08.472692  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0214 21:42:08.472736  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0214 21:42:08.472772  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0214 21:42:08.497980  281862 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0214 21:42:08.516848  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0214 21:42:08.516949  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0214 21:42:08.655134  281862 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:42:11.436685  281862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.967036793s)
	I0214 21:42:11.436718  281862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.963965238s)
	I0214 21:42:11.436735  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0214 21:42:11.436735  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0214 21:42:11.436773  281862 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (3.025623774s)
	I0214 21:42:11.436801  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0214 21:42:11.436776  281862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.963980869s)
	I0214 21:42:11.436829  281862 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0214 21:42:11.436835  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0214 21:42:11.436845  281862 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.938842758s)
	I0214 21:42:11.436890  281862 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0214 21:42:11.436894  281862 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.919926378s)
	I0214 21:42:11.436911  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0214 21:42:11.436895  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0214 21:42:11.436929  281862 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.781762797s)
	I0214 21:42:11.436985  281862 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0214 21:42:11.586222  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0214 21:42:11.586275  281862 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0214 21:42:11.586353  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0214 21:42:11.586279  281862 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0214 21:42:12.229036  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0214 21:42:12.229092  281862 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0214 21:42:12.229166  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0214 21:42:12.571870  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0214 21:42:12.571924  281862 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0214 21:42:12.571998  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0214 21:42:13.013953  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0214 21:42:13.014008  281862 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0214 21:42:13.014063  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0214 21:42:13.858028  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0214 21:42:13.858095  281862 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0214 21:42:13.858161  281862 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0214 21:42:14.603841  281862 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0214 21:42:14.603888  281862 cache_images.go:123] Successfully loaded all cached images
	I0214 21:42:14.603896  281862 cache_images.go:92] duration metric: took 6.876885426s to LoadCachedImages
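Since the preload did not supply the v1.24.4 images, each one is transferred from the host cache (.minikube/cache/images/amd64/...) and loaded into the guest's image store with podman load; on this guest podman and CRI-O appear to share the same storage, which is why a podman load is enough for crictl to see the image. The per-image cycle boils down to (illustrative sketch using one of the paths from the log):

    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
    sudo crictl images | grep kube-apiserver   # image is now visible to the CRI runtime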
	I0214 21:42:14.603913  281862 kubeadm.go:926] updating node { 192.168.39.128 8443 v1.24.4 crio true true} ...
	I0214 21:42:14.604047  281862 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-497787 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-497787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:42:14.604163  281862 ssh_runner.go:195] Run: crio config
	I0214 21:42:14.652396  281862 cni.go:84] Creating CNI manager for ""
	I0214 21:42:14.652413  281862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:42:14.652425  281862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:42:14.652451  281862 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-497787 NodeName:test-preload-497787 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 21:42:14.652615  281862 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-497787"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:42:14.652696  281862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0214 21:42:14.662389  281862 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:42:14.662453  281862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:42:14.671793  281862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0214 21:42:14.687503  281862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:42:14.702946  281862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
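The rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is now on the guest as /var/tmp/minikube/kubeadm.yaml.new. If one wanted to sanity-check such a file outside the test, a dry run is the least invasive option; this is only an illustrative sketch, the test does not run it, and flag behaviour may differ across kubeadm versions (here v1.24.4):

    sudo /var/lib/minikube/binaries/v1.24.4/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run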
	I0214 21:42:14.718878  281862 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0214 21:42:14.722585  281862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:42:14.734210  281862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:42:14.857513  281862 ssh_runner.go:195] Run: sudo systemctl start kubelet
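At this point the kubelet unit and its 10-kubeadm.conf drop-in have been written, systemd has been reloaded, and kubelet is started with the flags shown earlier (--container-runtime-endpoint=unix:///var/run/crio/crio.sock, --hostname-override=test-preload-497787, ...). An illustrative way to confirm what systemd actually picked up:

    systemctl cat kubelet     # shows the unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet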
	I0214 21:42:14.873506  281862 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787 for IP: 192.168.39.128
	I0214 21:42:14.873525  281862 certs.go:194] generating shared ca certs ...
	I0214 21:42:14.873543  281862 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:42:14.873714  281862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:42:14.873773  281862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:42:14.873787  281862 certs.go:256] generating profile certs ...
	I0214 21:42:14.873904  281862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.key
	I0214 21:42:14.874008  281862 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/apiserver.key.a40aeebb
	I0214 21:42:14.874071  281862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/proxy-client.key
	I0214 21:42:14.874219  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:42:14.874266  281862 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:42:14.874280  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:42:14.874310  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:42:14.874343  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:42:14.874368  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:42:14.874490  281862 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:42:14.875433  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:42:14.910061  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:42:14.953726  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:42:15.003796  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:42:15.032034  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0214 21:42:15.059540  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 21:42:15.088631  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:42:15.111213  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 21:42:15.133938  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:42:15.156282  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:42:15.178773  281862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:42:15.201016  281862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:42:15.216737  281862 ssh_runner.go:195] Run: openssl version
	I0214 21:42:15.222311  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:42:15.232338  281862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:42:15.236673  281862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:42:15.236711  281862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:42:15.242291  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:42:15.252181  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:42:15.262087  281862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:42:15.266493  281862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:42:15.266532  281862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:42:15.271846  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:42:15.281724  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:42:15.291763  281862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:42:15.296010  281862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:42:15.296045  281862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:42:15.301297  281862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
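Each certificate copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), the c_rehash convention OpenSSL uses to locate trusted CAs. The hash in the link name comes straight from the command the log runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem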
	I0214 21:42:15.311178  281862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:42:15.315704  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 21:42:15.321195  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 21:42:15.326659  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 21:42:15.332114  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 21:42:15.337604  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 21:42:15.343052  281862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0214 21:42:15.348958  281862 kubeadm.go:392] StartCluster: {Name:test-preload-497787 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-497787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:42:15.349036  281862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:42:15.349081  281862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:42:15.389978  281862 cri.go:89] found id: ""
	I0214 21:42:15.390025  281862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:42:15.399065  281862 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0214 21:42:15.399084  281862 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0214 21:42:15.399123  281862 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 21:42:15.407794  281862 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 21:42:15.408275  281862 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-497787" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:42:15.408424  281862 kubeconfig.go:62] /home/jenkins/minikube-integration/20315-243456/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-497787" cluster setting kubeconfig missing "test-preload-497787" context setting]
	I0214 21:42:15.408728  281862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:42:15.409326  281862 kapi.go:59] client config for test-preload-497787: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.crt", KeyFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.key", CAFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 21:42:15.409749  281862 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0214 21:42:15.409769  281862 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0214 21:42:15.409776  281862 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0214 21:42:15.409785  281862 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0214 21:42:15.410143  281862 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 21:42:15.418708  281862 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.128
	I0214 21:42:15.418736  281862 kubeadm.go:1152] stopping kube-system containers ...
	I0214 21:42:15.418750  281862 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0214 21:42:15.418790  281862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:42:15.453565  281862 cri.go:89] found id: ""
	I0214 21:42:15.453620  281862 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 21:42:15.468689  281862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:42:15.477598  281862 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:42:15.477619  281862 kubeadm.go:157] found existing configuration files:
	
	I0214 21:42:15.477662  281862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:42:15.485817  281862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:42:15.485863  281862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:42:15.494519  281862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:42:15.502616  281862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:42:15.502666  281862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:42:15.511246  281862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:42:15.519268  281862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:42:15.519313  281862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:42:15.527715  281862 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:42:15.535970  281862 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:42:15.536019  281862 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:42:15.544487  281862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:42:15.553352  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:15.654930  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:16.731489  281862 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076515754s)
	I0214 21:42:16.731527  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:16.996944  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:17.061331  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:17.146740  281862 api_server.go:52] waiting for apiserver process to appear ...
	I0214 21:42:17.146841  281862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:42:17.647519  281862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:42:18.146925  281862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:42:18.187508  281862 api_server.go:72] duration metric: took 1.040764839s to wait for apiserver process to appear ...
	I0214 21:42:18.187548  281862 api_server.go:88] waiting for apiserver healthz status ...
	I0214 21:42:18.187575  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:18.188105  281862 api_server.go:269] stopped: https://192.168.39.128:8443/healthz: Get "https://192.168.39.128:8443/healthz": dial tcp 192.168.39.128:8443: connect: connection refused
	I0214 21:42:18.688159  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:22.052887  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 21:42:22.052916  281862 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 21:42:22.052933  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:22.106969  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0214 21:42:22.106995  281862 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0214 21:42:22.188330  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:22.194144  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0214 21:42:22.194170  281862 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0214 21:42:22.687754  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:22.693016  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0214 21:42:22.693043  281862 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0214 21:42:23.187661  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:23.194125  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0214 21:42:23.194155  281862 api_server.go:103] status: https://192.168.39.128:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0214 21:42:23.688189  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:23.695366  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0214 21:42:23.701818  281862 api_server.go:141] control plane version: v1.24.4
	I0214 21:42:23.701842  281862 api_server.go:131] duration metric: took 5.514287247s to wait for apiserver health ...
	I0214 21:42:23.701852  281862 cni.go:84] Creating CNI manager for ""
	I0214 21:42:23.701858  281862 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:42:23.703442  281862 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 21:42:23.704632  281862 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 21:42:23.720527  281862 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0214 21:42:23.766082  281862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 21:42:23.770416  281862 system_pods.go:59] 7 kube-system pods found
	I0214 21:42:23.770455  281862 system_pods.go:61] "coredns-6d4b75cb6d-wgdqk" [590a6e91-eb8a-467c-bc35-2a02056a582b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 21:42:23.770462  281862 system_pods.go:61] "etcd-test-preload-497787" [7fde59bb-4aec-45ac-907f-df4ba6fe4b97] Running
	I0214 21:42:23.770468  281862 system_pods.go:61] "kube-apiserver-test-preload-497787" [4e919959-53f4-4ec6-a8a4-1767015013c6] Running
	I0214 21:42:23.770474  281862 system_pods.go:61] "kube-controller-manager-test-preload-497787" [a3b13eb5-6d4f-4a78-8a0f-a76dd60e6221] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 21:42:23.770478  281862 system_pods.go:61] "kube-proxy-4fsqn" [0a9e5652-0991-419d-a6f4-75341aa44455] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0214 21:42:23.770482  281862 system_pods.go:61] "kube-scheduler-test-preload-497787" [53b8ef06-0926-40f6-9c9f-cabf403d0de5] Running
	I0214 21:42:23.770491  281862 system_pods.go:61] "storage-provisioner" [16d48314-505e-4bde-825c-23e2606ed1eb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 21:42:23.770501  281862 system_pods.go:74] duration metric: took 4.39893ms to wait for pod list to return data ...
	I0214 21:42:23.770509  281862 node_conditions.go:102] verifying NodePressure condition ...
	I0214 21:42:23.772603  281862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 21:42:23.772639  281862 node_conditions.go:123] node cpu capacity is 2
	I0214 21:42:23.772655  281862 node_conditions.go:105] duration metric: took 2.140364ms to run NodePressure ...
	I0214 21:42:23.772678  281862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:42:24.015062  281862 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0214 21:42:24.020026  281862 kubeadm.go:735] kubelet initialised
	I0214 21:42:24.020049  281862 kubeadm.go:736] duration metric: took 4.95863ms waiting for restarted kubelet to initialise ...
	I0214 21:42:24.020068  281862 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 21:42:24.043718  281862 ops.go:34] apiserver oom_adj: -16
	I0214 21:42:24.043742  281862 kubeadm.go:593] duration metric: took 8.644649351s to restartPrimaryControlPlane
	I0214 21:42:24.043753  281862 kubeadm.go:394] duration metric: took 8.69480107s to StartCluster
	I0214 21:42:24.043776  281862 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:42:24.043867  281862 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:42:24.044811  281862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:42:24.045131  281862 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:42:24.045242  281862 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 21:42:24.045347  281862 addons.go:69] Setting storage-provisioner=true in profile "test-preload-497787"
	I0214 21:42:24.045365  281862 config.go:182] Loaded profile config "test-preload-497787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0214 21:42:24.045370  281862 addons.go:238] Setting addon storage-provisioner=true in "test-preload-497787"
	I0214 21:42:24.045397  281862 addons.go:69] Setting default-storageclass=true in profile "test-preload-497787"
	W0214 21:42:24.045415  281862 addons.go:247] addon storage-provisioner should already be in state true
	I0214 21:42:24.045431  281862 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-497787"
	I0214 21:42:24.045471  281862 host.go:66] Checking if "test-preload-497787" exists ...
	I0214 21:42:24.045786  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:42:24.045814  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:42:24.045831  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:42:24.045846  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:42:24.046492  281862 out.go:177] * Verifying Kubernetes components...
	I0214 21:42:24.047873  281862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:42:24.061721  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
	I0214 21:42:24.061735  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39905
	I0214 21:42:24.062179  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:42:24.062221  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:42:24.062721  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:42:24.062747  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:42:24.062854  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:42:24.062874  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:42:24.063079  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:42:24.063253  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:42:24.063314  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetState
	I0214 21:42:24.063821  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:42:24.063866  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:42:24.065574  281862 kapi.go:59] client config for test-preload-497787: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.crt", KeyFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.key", CAFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 21:42:24.065908  281862 addons.go:238] Setting addon default-storageclass=true in "test-preload-497787"
	W0214 21:42:24.065926  281862 addons.go:247] addon default-storageclass should already be in state true
	I0214 21:42:24.065952  281862 host.go:66] Checking if "test-preload-497787" exists ...
	I0214 21:42:24.066255  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:42:24.066292  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:42:24.078073  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0214 21:42:24.078492  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:42:24.079106  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:42:24.079127  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:42:24.079502  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:42:24.079698  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetState
	I0214 21:42:24.080811  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0214 21:42:24.081292  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:24.081385  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:42:24.081772  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:42:24.081799  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:42:24.082160  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:42:24.082615  281862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:42:24.082665  281862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:42:24.082917  281862 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:42:24.084087  281862 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:42:24.084102  281862 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 21:42:24.084116  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:24.086777  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:24.087174  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:24.087208  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:24.087365  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:24.087528  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:24.087681  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:24.087807  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:24.137413  281862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0214 21:42:24.137871  281862 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:42:24.138330  281862 main.go:141] libmachine: Using API Version  1
	I0214 21:42:24.138349  281862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:42:24.138668  281862 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:42:24.138846  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetState
	I0214 21:42:24.140331  281862 main.go:141] libmachine: (test-preload-497787) Calling .DriverName
	I0214 21:42:24.140529  281862 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 21:42:24.140544  281862 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 21:42:24.140562  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHHostname
	I0214 21:42:24.143496  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:24.143835  281862 main.go:141] libmachine: (test-preload-497787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:ac:c9", ip: ""} in network mk-test-preload-497787: {Iface:virbr1 ExpiryTime:2025-02-14 22:41:54 +0000 UTC Type:0 Mac:52:54:00:9b:ac:c9 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:test-preload-497787 Clientid:01:52:54:00:9b:ac:c9}
	I0214 21:42:24.143879  281862 main.go:141] libmachine: (test-preload-497787) DBG | domain test-preload-497787 has defined IP address 192.168.39.128 and MAC address 52:54:00:9b:ac:c9 in network mk-test-preload-497787
	I0214 21:42:24.144039  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHPort
	I0214 21:42:24.144224  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHKeyPath
	I0214 21:42:24.144377  281862 main.go:141] libmachine: (test-preload-497787) Calling .GetSSHUsername
	I0214 21:42:24.144532  281862 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/test-preload-497787/id_rsa Username:docker}
	I0214 21:42:24.358754  281862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:42:24.388331  281862 node_ready.go:35] waiting up to 6m0s for node "test-preload-497787" to be "Ready" ...
	I0214 21:42:24.557822  281862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 21:42:24.573416  281862 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 21:42:25.535740  281862 main.go:141] libmachine: Making call to close driver server
	I0214 21:42:25.535774  281862 main.go:141] libmachine: (test-preload-497787) Calling .Close
	I0214 21:42:25.536193  281862 main.go:141] libmachine: (test-preload-497787) DBG | Closing plugin on server side
	I0214 21:42:25.536195  281862 main.go:141] libmachine: Successfully made call to close driver server
	I0214 21:42:25.536222  281862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 21:42:25.536243  281862 main.go:141] libmachine: Making call to close driver server
	I0214 21:42:25.536256  281862 main.go:141] libmachine: (test-preload-497787) Calling .Close
	I0214 21:42:25.536546  281862 main.go:141] libmachine: Successfully made call to close driver server
	I0214 21:42:25.536568  281862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 21:42:25.546021  281862 main.go:141] libmachine: Making call to close driver server
	I0214 21:42:25.546041  281862 main.go:141] libmachine: (test-preload-497787) Calling .Close
	I0214 21:42:25.546236  281862 main.go:141] libmachine: Successfully made call to close driver server
	I0214 21:42:25.546256  281862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 21:42:25.546262  281862 main.go:141] libmachine: (test-preload-497787) DBG | Closing plugin on server side
	I0214 21:42:25.582553  281862 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.009094654s)
	I0214 21:42:25.582603  281862 main.go:141] libmachine: Making call to close driver server
	I0214 21:42:25.582619  281862 main.go:141] libmachine: (test-preload-497787) Calling .Close
	I0214 21:42:25.582880  281862 main.go:141] libmachine: Successfully made call to close driver server
	I0214 21:42:25.582895  281862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 21:42:25.582912  281862 main.go:141] libmachine: Making call to close driver server
	I0214 21:42:25.582923  281862 main.go:141] libmachine: (test-preload-497787) Calling .Close
	I0214 21:42:25.583125  281862 main.go:141] libmachine: Successfully made call to close driver server
	I0214 21:42:25.583144  281862 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 21:42:25.584719  281862 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0214 21:42:25.585873  281862 addons.go:514] duration metric: took 1.540639891s for enable addons: enabled=[default-storageclass storage-provisioner]
	W0214 21:42:26.394399  281862 node_ready.go:57] node "test-preload-497787" has "Ready":"False" status (will retry)
	W0214 21:42:28.892195  281862 node_ready.go:57] node "test-preload-497787" has "Ready":"False" status (will retry)
	W0214 21:42:31.393164  281862 node_ready.go:57] node "test-preload-497787" has "Ready":"False" status (will retry)
	I0214 21:42:32.891973  281862 node_ready.go:49] node "test-preload-497787" is "Ready"
	I0214 21:42:32.892008  281862 node_ready.go:38] duration metric: took 8.503630796s for node "test-preload-497787" to be "Ready" ...
	I0214 21:42:32.892035  281862 api_server.go:52] waiting for apiserver process to appear ...
	I0214 21:42:32.892085  281862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:42:32.908036  281862 api_server.go:72] duration metric: took 8.862863963s to wait for apiserver process to appear ...
	I0214 21:42:32.908060  281862 api_server.go:88] waiting for apiserver healthz status ...
	I0214 21:42:32.908081  281862 api_server.go:253] Checking apiserver healthz at https://192.168.39.128:8443/healthz ...
	I0214 21:42:32.913038  281862 api_server.go:279] https://192.168.39.128:8443/healthz returned 200:
	ok
	I0214 21:42:32.913813  281862 api_server.go:141] control plane version: v1.24.4
	I0214 21:42:32.913836  281862 api_server.go:131] duration metric: took 5.768581ms to wait for apiserver health ...
	I0214 21:42:32.913845  281862 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 21:42:32.917044  281862 system_pods.go:59] 7 kube-system pods found
	I0214 21:42:32.917073  281862 system_pods.go:61] "coredns-6d4b75cb6d-wgdqk" [590a6e91-eb8a-467c-bc35-2a02056a582b] Running
	I0214 21:42:32.917080  281862 system_pods.go:61] "etcd-test-preload-497787" [7fde59bb-4aec-45ac-907f-df4ba6fe4b97] Running
	I0214 21:42:32.917095  281862 system_pods.go:61] "kube-apiserver-test-preload-497787" [4e919959-53f4-4ec6-a8a4-1767015013c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 21:42:32.917105  281862 system_pods.go:61] "kube-controller-manager-test-preload-497787" [a3b13eb5-6d4f-4a78-8a0f-a76dd60e6221] Running
	I0214 21:42:32.917114  281862 system_pods.go:61] "kube-proxy-4fsqn" [0a9e5652-0991-419d-a6f4-75341aa44455] Running
	I0214 21:42:32.917124  281862 system_pods.go:61] "kube-scheduler-test-preload-497787" [53b8ef06-0926-40f6-9c9f-cabf403d0de5] Running
	I0214 21:42:32.917133  281862 system_pods.go:61] "storage-provisioner" [16d48314-505e-4bde-825c-23e2606ed1eb] Running
	I0214 21:42:32.917141  281862 system_pods.go:74] duration metric: took 3.28783ms to wait for pod list to return data ...
	I0214 21:42:32.917152  281862 default_sa.go:34] waiting for default service account to be created ...
	I0214 21:42:32.919030  281862 default_sa.go:45] found service account: "default"
	I0214 21:42:32.919049  281862 default_sa.go:55] duration metric: took 1.888111ms for default service account to be created ...
	I0214 21:42:32.919056  281862 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 21:42:32.922074  281862 system_pods.go:86] 7 kube-system pods found
	I0214 21:42:32.922099  281862 system_pods.go:89] "coredns-6d4b75cb6d-wgdqk" [590a6e91-eb8a-467c-bc35-2a02056a582b] Running
	I0214 21:42:32.922104  281862 system_pods.go:89] "etcd-test-preload-497787" [7fde59bb-4aec-45ac-907f-df4ba6fe4b97] Running
	I0214 21:42:32.922111  281862 system_pods.go:89] "kube-apiserver-test-preload-497787" [4e919959-53f4-4ec6-a8a4-1767015013c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 21:42:32.922118  281862 system_pods.go:89] "kube-controller-manager-test-preload-497787" [a3b13eb5-6d4f-4a78-8a0f-a76dd60e6221] Running
	I0214 21:42:32.922126  281862 system_pods.go:89] "kube-proxy-4fsqn" [0a9e5652-0991-419d-a6f4-75341aa44455] Running
	I0214 21:42:32.922139  281862 system_pods.go:89] "kube-scheduler-test-preload-497787" [53b8ef06-0926-40f6-9c9f-cabf403d0de5] Running
	I0214 21:42:32.922145  281862 system_pods.go:89] "storage-provisioner" [16d48314-505e-4bde-825c-23e2606ed1eb] Running
	I0214 21:42:32.922151  281862 system_pods.go:126] duration metric: took 3.089885ms to wait for k8s-apps to be running ...
	I0214 21:42:32.922157  281862 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 21:42:32.922195  281862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:42:32.936422  281862 system_svc.go:56] duration metric: took 14.25807ms WaitForService to wait for kubelet
	I0214 21:42:32.936446  281862 kubeadm.go:578] duration metric: took 8.891276002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:42:32.936467  281862 node_conditions.go:102] verifying NodePressure condition ...
	I0214 21:42:32.939296  281862 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 21:42:32.939322  281862 node_conditions.go:123] node cpu capacity is 2
	I0214 21:42:32.939343  281862 node_conditions.go:105] duration metric: took 2.861924ms to run NodePressure ...
	I0214 21:42:32.939358  281862 start.go:241] waiting for startup goroutines ...
	I0214 21:42:32.939365  281862 start.go:246] waiting for cluster config update ...
	I0214 21:42:32.939380  281862 start.go:255] writing updated cluster config ...
	I0214 21:42:32.939653  281862 ssh_runner.go:195] Run: rm -f paused
	I0214 21:42:32.943964  281862 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 21:42:32.944593  281862 kapi.go:59] client config for test-preload-497787: &rest.Config{Host:"https://192.168.39.128:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.crt", KeyFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/profiles/test-preload-497787/client.key", CAFile:"/home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 21:42:32.947449  281862 pod_ready.go:83] waiting for pod "coredns-6d4b75cb6d-wgdqk" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:32.951797  281862 pod_ready.go:94] pod "coredns-6d4b75cb6d-wgdqk" is "Ready"
	I0214 21:42:32.951820  281862 pod_ready.go:86] duration metric: took 4.350887ms for pod "coredns-6d4b75cb6d-wgdqk" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:32.954251  281862 pod_ready.go:83] waiting for pod "etcd-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:32.958349  281862 pod_ready.go:94] pod "etcd-test-preload-497787" is "Ready"
	I0214 21:42:32.958370  281862 pod_ready.go:86] duration metric: took 4.098844ms for pod "etcd-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:32.961031  281862 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:34.967737  281862 pod_ready.go:94] pod "kube-apiserver-test-preload-497787" is "Ready"
	I0214 21:42:34.967771  281862 pod_ready.go:86] duration metric: took 2.006719071s for pod "kube-apiserver-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:34.971825  281862 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:34.976079  281862 pod_ready.go:94] pod "kube-controller-manager-test-preload-497787" is "Ready"
	I0214 21:42:34.976134  281862 pod_ready.go:86] duration metric: took 4.253441ms for pod "kube-controller-manager-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:35.147926  281862 pod_ready.go:83] waiting for pod "kube-proxy-4fsqn" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:35.548299  281862 pod_ready.go:94] pod "kube-proxy-4fsqn" is "Ready"
	I0214 21:42:35.548327  281862 pod_ready.go:86] duration metric: took 400.375485ms for pod "kube-proxy-4fsqn" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:35.748271  281862 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:36.147378  281862 pod_ready.go:94] pod "kube-scheduler-test-preload-497787" is "Ready"
	I0214 21:42:36.147411  281862 pod_ready.go:86] duration metric: took 399.116837ms for pod "kube-scheduler-test-preload-497787" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:42:36.147424  281862 pod_ready.go:40] duration metric: took 3.203434969s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 21:42:36.190638  281862 start.go:607] kubectl: 1.32.2, cluster: 1.24.4 (minor skew: 8)
	I0214 21:42:36.192059  281862 out.go:201] 
	W0214 21:42:36.193265  281862 out.go:270] ! /usr/local/bin/kubectl is version 1.32.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0214 21:42:36.194461  281862 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0214 21:42:36.195573  281862 out.go:177] * Done! kubectl is now configured to use "test-preload-497787" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.193877568Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5d304247-1d50-4513-b555-88625142e5c5 name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.193962181Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1739569337968567079,StartedAt:1739569338139791658,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.24.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b7fe4407616caadf9f6c9e684eab7f97/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b7fe4407616caadf9f6c9e684eab7f97/containers/kube-scheduler/7d445e9a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-test-preload-497787_b7fe4407616caadf9f6c9e684eab7f97/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources
{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5d304247-1d50-4513-b555-88625142e5c5 name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.194503504Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c85248a9-be65-48e1-8c1f-3413f73be64f name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.194608803Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1739569337935876324,StartedAt:1739569338027378956,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.24.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,},Annotations:map[string]string{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/acf6ce14045c44a413d533fcf3646417/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/acf6ce14045c44a413d533fcf3646417/containers/kube-apiserver/5b200502,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Con
tainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-test-preload-497787_acf6ce14045c44a413d533fcf3646417/kube-apiserver/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c85248a9-be65-48e1-8c1f-3413f73be64f name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.195130572Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=32cc6133-f955-4287-84a4-f3a66e6ac6ac name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.195237439Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1739569337896012967,StartedAt:1739569337994963690,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.24.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5425c4b0d194a3367e4f4839be1fe166/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5425c4b0d194a3367e4f4839be1fe166/containers/kube-controller-manager/c1076e2b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRI
VATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-test-preload-497787_5425c4b0d194a3367e4f4839be1fe166/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,C
pusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=32cc6133-f955-4287-84a4-f3a66e6ac6ac name=/runtime.v1.RuntimeService/ContainerStatus
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.199152384Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cc6b7ced-a6f3-4991-a678-d8031dafaa0c name=/runtime.v1.ImageService/ListImages
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.199795714Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,RepoTags:[k8s.gcr.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-apiserver:v1.24.4],RepoDigests:[k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857 k8s.gcr.io/kube-apiserver@sha256:74496d788bad4b343b2a2ead2b4ac8f4d0d99c45c451b51c076f22e52b84f1e5 k8s.gcr.io/kube-apiserver@sha256:aa1ef03e6734883f677c768fa970d54c8ae490aad157b34c91e73adb7e4d5a90 registry.k8s.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857 registry.k8s.io/kube-apiserver@sha256:74496d788bad4b343b2a2ead2b4ac8f4d0d99c45c451b51c076f22e52b84f1e5 registry.k8s.io/kube-apiserver@sha256:aa1ef03e6734883f677c768fa970d54c8ae490aad157b34c91e73adb7e4d5a90],Size_:131097841,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:1f99c
b6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,RepoTags:[k8s.gcr.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4],RepoDigests:[k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891 k8s.gcr.io/kube-controller-manager@sha256:da588c9f0e65e93317f5e016603d1ed7466427e9e0cf8b028c505bf30837f7dd k8s.gcr.io/kube-controller-manager@sha256:f9400b11d780871e4e87cac8a8d4f8fc6bb83d7793b58981020b43be55f71cb9 registry.k8s.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891 registry.k8s.io/kube-controller-manager@sha256:da588c9f0e65e93317f5e016603d1ed7466427e9e0cf8b028c505bf30837f7dd registry.k8s.io/kube-controller-manager@sha256:f9400b11d780871e4e87cac8a8d4f8fc6bb83d7793b58981020b43be55f71cb9],Size_:120743002,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,RepoTags:[k8s.gcr.io/kube-scheduler:v1.2
4.4 registry.k8s.io/kube-scheduler:v1.24.4],RepoDigests:[k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2 k8s.gcr.io/kube-scheduler@sha256:a16e4ce348403bc65bc6b755aef81e4970685c4e32fc398b10e49de15993ba21 k8s.gcr.io/kube-scheduler@sha256:cf1e1f85916287003e82d852a709917e200afd5caca04499d525ee98c21677bb registry.k8s.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2 registry.k8s.io/kube-scheduler@sha256:a16e4ce348403bc65bc6b755aef81e4970685c4e32fc398b10e49de15993ba21 registry.k8s.io/kube-scheduler@sha256:cf1e1f85916287003e82d852a709917e200afd5caca04499d525ee98c21677bb],Size_:52343896,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,RepoTags:[k8s.gcr.io/kube-proxy:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4],RepoDigests:[k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386 k8s.gcr.io/kube-proxy@sha256:b
fac4b9fbf43ee6e1b30f90bc5a889067a4b4081927b4b6d322ed107a8549ab0 k8s.gcr.io/kube-proxy@sha256:fec80877f53c7999f8268ab856ef2517f01a72b5de910c77f921ef784d44617f registry.k8s.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386 registry.k8s.io/kube-proxy@sha256:bfac4b9fbf43ee6e1b30f90bc5a889067a4b4081927b4b6d322ed107a8549ab0 registry.k8s.io/kube-proxy@sha256:fec80877f53c7999f8268ab856ef2517f01a72b5de910c77f921ef784d44617f],Size_:111862619,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165,RepoTags:[k8s.gcr.io/pause:3.7 registry.k8s.io/pause:3.7],RepoDigests:[k8s.gcr.io/pause@sha256:7be59e780e44025b8bdfe535f04a7e83ea03dd949037ebfcfdbf5880c8f87ac7 k8s.gcr.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c k8s.gcr.io/pause@sha256:f81611a21cf91214c1ea751c5b525931a0e2ebabe62b3937b6158039ff6f922d registry.k8s.io/pause@sha256:7be59e780e44025b8bdfe535f04a7e83ea03dd949037ebfcfdbf5880c8f87ac7 reg
istry.k8s.io/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c registry.k8s.io/pause@sha256:f81611a21cf91214c1ea751c5b525931a0e2ebabe62b3937b6158039ff6f922d],Size_:718423,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,RepoTags:[k8s.gcr.io/etcd:3.5.3-0 registry.k8s.io/etcd:3.5.3-0],RepoDigests:[k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 k8s.gcr.io/etcd@sha256:533631a3c25663124e848280973b1a5d5ae34f8766fef9b6b839d4b08c893e38 k8s.gcr.io/etcd@sha256:678382ed340f6996ad40cdba4a4745a2ada41ed9c322c026a2a695338a93dcbe registry.k8s.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 registry.k8s.io/etcd@sha256:533631a3c25663124e848280973b1a5d5ae34f8766fef9b6b839d4b08c893e38 registry.k8s.io/etcd@sha256:678382ed340f6996ad40cdba4a4745a2ada41ed9c322c026a2a695338a93dcbe],Size_:300857875,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinne
d:false,},&Image{Id:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,RepoTags:[k8s.gcr.io/coredns/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6],RepoDigests:[k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e k8s.gcr.io/coredns/coredns@sha256:8916c89e1538ea3941b58847e448a2c6d940c01b8e716b20423d2d8b189d3972 k8s.gcr.io/coredns/coredns@sha256:a0d77904d929b640f13c5098c70950d084042bed9ef73b60bfe00974a84ab722 registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns@sha256:8916c89e1538ea3941b58847e448a2c6d940c01b8e716b20423d2d8b189d3972 registry.k8s.io/coredns/coredns@sha256:a0d77904d929b640f13c5098c70950d084042bed9ef73b60bfe00974a84ab722],Size_:46959895,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provi
sioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:d921cee8494827575ce8b9cc6cf7dae988b6378ce3f62217bf430467916529b9,RepoTags:[docker.io/kindest/kindnetd:v20220726-ed811e41],RepoDigests:[docker.io/kindest/kindnetd@sha256:5240e7ff1fefade59846259c1edabad82fe4c642c66b7850947015d1dd699251 docker.io/kindest/kindnetd@sha256:e2d4d675dcf28a90102ad5219b75c5a0ee096c4321247dfae31dd1467611a9fb],Size_:63344219,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=cc6b7ced-a6f3-4991-a678-d8031dafaa0c name=/runtime.v1.ImageService/ListImages
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.217392991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ca46581-f3c2-46e0-9569-304af9e88ce9 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.217439053Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ca46581-f3c2-46e0-9569-304af9e88ce9 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.219620233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=464b9899-0bc1-42cd-824b-4c37288ee4ab name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.220053385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569357220035696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=464b9899-0bc1-42cd-824b-4c37288ee4ab name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.220694884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fe56237-4ec0-42b3-beea-4c8d0e70ccaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.220800036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fe56237-4ec0-42b3-beea-4c8d0e70ccaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.220972646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7300c0c84061474dd2b64f9568ce3dfe74a66ce12bc88bf0f1b9c82b6d5f411,PodSandboxId:0dd478127f0d2990671001fdcff5aa2cb41482c8a1a48275d93d04d707c73826,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739569351370606239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wgdqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a6e91-eb8a-467c-bc35-2a02056a582b,},Annotations:map[string]string{io.kubernetes.container.hash: 18a01380,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184da0c26c9612c30ee38cce46fc89e8c85b2c0eb332de43f32a590aa0766926,PodSandboxId:6283a74e76adde4f41146209eb72350661c3d81d07e8236e0e9225c2d5b002a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739569344322950651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 16d48314-505e-4bde-825c-23e2606ed1eb,},Annotations:map[string]string{io.kubernetes.container.hash: 7f20e4d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d65d3930f6f6206c22965240d855bb2c40966f813b0be8fa628368925783ad,PodSandboxId:622243adec1c4885dee176dcae6164521a385f4a79b13f3b210fbdce9fedab35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739569343864909613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fsqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
9e5652-0991-419d-a6f4-75341aa44455,},Annotations:map[string]string{io.kubernetes.container.hash: 5be4fda2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246edf6198d5cefa31bf207a81fb300d2b8b9a79a62c968d49ea5d5d69f13af5,PodSandboxId:d2fb0bf6172bf42904e99affb2e06951dc3c1fcb61b2b604c62548f9fc10f5a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739569337849647223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b892b686f16849968c71b969924b2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2f1fa9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8,PodSandboxId:f81677deaa58031c6488cd483463ba6de54546aedab7f19751955a066127f767,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739569337908030979,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108,PodSandboxId:b988fb6ca062f92431844e54cf1294e10a517f3231f4d30f8e3245d1582a70c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739569337860906778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2,PodSandboxId:e9052808958e1ed0e01600b589ee2ac382eef681f7d23614a94a8e27a77cd785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739569337811812219,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fe56237-4ec0-42b3-beea-4c8d0e70ccaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.237042632Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09dc839d-5d12-4e24-9f74-07e268a54532 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.237678099Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0dd478127f0d2990671001fdcff5aa2cb41482c8a1a48275d93d04d707c73826,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-wgdqk,Uid:590a6e91-eb8a-467c-bc35-2a02056a582b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569351141517027,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-wgdqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a6e91-eb8a-467c-bc35-2a02056a582b,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T21:42:23.120660049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6283a74e76adde4f41146209eb72350661c3d81d07e8236e0e9225c2d5b002a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:16d48314-505e-4bde-825c-23e2606ed1eb,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569344026831478,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16d48314-505e-4bde-825c-23e2606ed1eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-14T21:42:23.120658910Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:622243adec1c4885dee176dcae6164521a385f4a79b13f3b210fbdce9fedab35,Metadata:&PodSandboxMetadata{Name:kube-proxy-4fsqn,Uid:0a9e5652-0991-419d-a6f4-75341aa44455,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569343735570423,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4fsqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a9e5652-0991-419d-a6f4-75341aa44455,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T21:42:23.120656527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f81677deaa58031c6488cd483463ba6de54546aedab7f19751955a066127f767,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-497787,Uid:b7fe440
7616caadf9f6c9e684eab7f97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337675148907,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b7fe4407616caadf9f6c9e684eab7f97,kubernetes.io/config.seen: 2025-02-14T21:42:17.138662692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9052808958e1ed0e01600b589ee2ac382eef681f7d23614a94a8e27a77cd785,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-497787,Uid:5425c4b0d194a3367e4f4839be1fe166,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337671133239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5425c4b0d194a3367e4f4839be1fe166,kubernetes.io/config.seen: 2025-02-14T21:42:17.138661614Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b988fb6ca062f92431844e54cf1294e10a517f3231f4d30f8e3245d1582a70c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-497787,Uid:acf6ce14045c44a413d533fcf3646417,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337669396733,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.128:8443,kubernetes.io/config.hash: acf6ce14045c44a413d533fcf3646417,kub
ernetes.io/config.seen: 2025-02-14T21:42:17.138632561Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2fb0bf6172bf42904e99affb2e06951dc3c1fcb61b2b604c62548f9fc10f5a0,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-497787,Uid:8b892b686f16849968c71b969924b2d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337665866246,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b892b686f16849968c71b969924b2d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.128:2379,kubernetes.io/config.hash: 8b892b686f16849968c71b969924b2d1,kubernetes.io/config.seen: 2025-02-14T21:42:17.155149270Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=09dc839d-5d12-4e24-9f74-07e268a54532 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.238651837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63927c51-6168-456d-a238-e4d11be365a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.238736716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63927c51-6168-456d-a238-e4d11be365a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.238913761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7300c0c84061474dd2b64f9568ce3dfe74a66ce12bc88bf0f1b9c82b6d5f411,PodSandboxId:0dd478127f0d2990671001fdcff5aa2cb41482c8a1a48275d93d04d707c73826,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739569351370606239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wgdqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a6e91-eb8a-467c-bc35-2a02056a582b,},Annotations:map[string]string{io.kubernetes.container.hash: 18a01380,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184da0c26c9612c30ee38cce46fc89e8c85b2c0eb332de43f32a590aa0766926,PodSandboxId:6283a74e76adde4f41146209eb72350661c3d81d07e8236e0e9225c2d5b002a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739569344322950651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 16d48314-505e-4bde-825c-23e2606ed1eb,},Annotations:map[string]string{io.kubernetes.container.hash: 7f20e4d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d65d3930f6f6206c22965240d855bb2c40966f813b0be8fa628368925783ad,PodSandboxId:622243adec1c4885dee176dcae6164521a385f4a79b13f3b210fbdce9fedab35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739569343864909613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fsqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
9e5652-0991-419d-a6f4-75341aa44455,},Annotations:map[string]string{io.kubernetes.container.hash: 5be4fda2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246edf6198d5cefa31bf207a81fb300d2b8b9a79a62c968d49ea5d5d69f13af5,PodSandboxId:d2fb0bf6172bf42904e99affb2e06951dc3c1fcb61b2b604c62548f9fc10f5a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739569337849647223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b892b686f16849968c71b969924b2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2f1fa9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8,PodSandboxId:f81677deaa58031c6488cd483463ba6de54546aedab7f19751955a066127f767,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739569337908030979,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108,PodSandboxId:b988fb6ca062f92431844e54cf1294e10a517f3231f4d30f8e3245d1582a70c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739569337860906778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2,PodSandboxId:e9052808958e1ed0e01600b589ee2ac382eef681f7d23614a94a8e27a77cd785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739569337811812219,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63927c51-6168-456d-a238-e4d11be365a7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.241098796Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e6cafa2-93d1-479d-b6ad-2ab933f1d31c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.241386483Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0dd478127f0d2990671001fdcff5aa2cb41482c8a1a48275d93d04d707c73826,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-wgdqk,Uid:590a6e91-eb8a-467c-bc35-2a02056a582b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569351141517027,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-wgdqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a6e91-eb8a-467c-bc35-2a02056a582b,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T21:42:23.120660049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6283a74e76adde4f41146209eb72350661c3d81d07e8236e0e9225c2d5b002a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:16d48314-505e-4bde-825c-23e2606ed1eb,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569344026831478,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16d48314-505e-4bde-825c-23e2606ed1eb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-14T21:42:23.120658910Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:622243adec1c4885dee176dcae6164521a385f4a79b13f3b210fbdce9fedab35,Metadata:&PodSandboxMetadata{Name:kube-proxy-4fsqn,Uid:0a9e5652-0991-419d-a6f4-75341aa44455,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569343735570423,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4fsqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a9e5652-0991-419d-a6f4-75341aa44455,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-14T21:42:23.120656527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f81677deaa58031c6488cd483463ba6de54546aedab7f19751955a066127f767,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-497787,Uid:b7fe440
7616caadf9f6c9e684eab7f97,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337675148907,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b7fe4407616caadf9f6c9e684eab7f97,kubernetes.io/config.seen: 2025-02-14T21:42:17.138662692Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9052808958e1ed0e01600b589ee2ac382eef681f7d23614a94a8e27a77cd785,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-497787,Uid:5425c4b0d194a3367e4f4839be1fe166,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337671133239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5425c4b0d194a3367e4f4839be1fe166,kubernetes.io/config.seen: 2025-02-14T21:42:17.138661614Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b988fb6ca062f92431844e54cf1294e10a517f3231f4d30f8e3245d1582a70c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-497787,Uid:acf6ce14045c44a413d533fcf3646417,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337669396733,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.128:8443,kubernetes.io/config.hash: acf6ce14045c44a413d533fcf3646417,kub
ernetes.io/config.seen: 2025-02-14T21:42:17.138632561Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d2fb0bf6172bf42904e99affb2e06951dc3c1fcb61b2b604c62548f9fc10f5a0,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-497787,Uid:8b892b686f16849968c71b969924b2d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739569337665866246,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b892b686f16849968c71b969924b2d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.128:2379,kubernetes.io/config.hash: 8b892b686f16849968c71b969924b2d1,kubernetes.io/config.seen: 2025-02-14T21:42:17.155149270Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2e6cafa2-93d1-479d-b6ad-2ab933f1d31c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.244620308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c277ef12-cb4b-4cb3-9a35-4a066005c80e name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.245387353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c277ef12-cb4b-4cb3-9a35-4a066005c80e name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:42:37 test-preload-497787 crio[667]: time="2025-02-14 21:42:37.246054608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d7300c0c84061474dd2b64f9568ce3dfe74a66ce12bc88bf0f1b9c82b6d5f411,PodSandboxId:0dd478127f0d2990671001fdcff5aa2cb41482c8a1a48275d93d04d707c73826,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739569351370606239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-wgdqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a6e91-eb8a-467c-bc35-2a02056a582b,},Annotations:map[string]string{io.kubernetes.container.hash: 18a01380,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:184da0c26c9612c30ee38cce46fc89e8c85b2c0eb332de43f32a590aa0766926,PodSandboxId:6283a74e76adde4f41146209eb72350661c3d81d07e8236e0e9225c2d5b002a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739569344322950651,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 16d48314-505e-4bde-825c-23e2606ed1eb,},Annotations:map[string]string{io.kubernetes.container.hash: 7f20e4d0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03d65d3930f6f6206c22965240d855bb2c40966f813b0be8fa628368925783ad,PodSandboxId:622243adec1c4885dee176dcae6164521a385f4a79b13f3b210fbdce9fedab35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739569343864909613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fsqn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a
9e5652-0991-419d-a6f4-75341aa44455,},Annotations:map[string]string{io.kubernetes.container.hash: 5be4fda2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:246edf6198d5cefa31bf207a81fb300d2b8b9a79a62c968d49ea5d5d69f13af5,PodSandboxId:d2fb0bf6172bf42904e99affb2e06951dc3c1fcb61b2b604c62548f9fc10f5a0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739569337849647223,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b892b686f16849968c71b969924b2d1,},Anno
tations:map[string]string{io.kubernetes.container.hash: 2f1fa9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8,PodSandboxId:f81677deaa58031c6488cd483463ba6de54546aedab7f19751955a066127f767,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739569337908030979,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7fe4407616caadf9f6c9e684eab7f97,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108,PodSandboxId:b988fb6ca062f92431844e54cf1294e10a517f3231f4d30f8e3245d1582a70c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739569337860906778,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf6ce14045c44a413d533fcf3646417,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: ef0afd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2,PodSandboxId:e9052808958e1ed0e01600b589ee2ac382eef681f7d23614a94a8e27a77cd785,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739569337811812219,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-497787,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5425c4b0d194a3367e4f4839be1fe166,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c277ef12-cb4b-4cb3-9a35-4a066005c80e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d7300c0c84061       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   0dd478127f0d2       coredns-6d4b75cb6d-wgdqk
	184da0c26c961       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   6283a74e76add       storage-provisioner
	03d65d3930f6f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   622243adec1c4       kube-proxy-4fsqn
	ffbd185789a15       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   f81677deaa580       kube-scheduler-test-preload-497787
	0b1a2e1a09645       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   b988fb6ca062f       kube-apiserver-test-preload-497787
	246edf6198d5c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   d2fb0bf6172bf       etcd-test-preload-497787
	bb2e87f46b5ff       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   e9052808958e1       kube-controller-manager-test-preload-497787
	
	
	==> coredns [d7300c0c84061474dd2b64f9568ce3dfe74a66ce12bc88bf0f1b9c82b6d5f411] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47768 - 50461 "HINFO IN 4779015358622389261.7956686755063609579. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.077667195s
	
	
	==> describe nodes <==
	Name:               test-preload-497787
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-497787
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=test-preload-497787
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_39_10_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:39:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-497787
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:42:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:42:32 +0000   Fri, 14 Feb 2025 21:39:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:42:32 +0000   Fri, 14 Feb 2025 21:39:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:42:32 +0000   Fri, 14 Feb 2025 21:39:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 21:42:32 +0000   Fri, 14 Feb 2025 21:42:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.128
	  Hostname:    test-preload-497787
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87e6cda8c4af466791ba8e388432573c
	  System UUID:                87e6cda8-c4af-4667-91ba-8e388432573c
	  Boot ID:                    c9edf277-cd67-4255-a4ca-b083e70d385b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wgdqk                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m14s
	  kube-system                 etcd-test-preload-497787                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m27s
	  kube-system                 kube-apiserver-test-preload-497787             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-test-preload-497787    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-4fsqn                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-scheduler-test-preload-497787             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12s                    kube-proxy       
	  Normal  Starting                 3m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m34s (x4 over 3m34s)  kubelet          Node test-preload-497787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m34s (x4 over 3m34s)  kubelet          Node test-preload-497787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m34s (x4 over 3m34s)  kubelet          Node test-preload-497787 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet          Node test-preload-497787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s                  kubelet          Node test-preload-497787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s                  kubelet          Node test-preload-497787 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m17s                  kubelet          Node test-preload-497787 status is now: NodeReady
	  Normal  RegisteredNode           3m14s                  node-controller  Node test-preload-497787 event: Registered Node test-preload-497787 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)      kubelet          Node test-preload-497787 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)      kubelet          Node test-preload-497787 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)      kubelet          Node test-preload-497787 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                     node-controller  Node test-preload-497787 event: Registered Node test-preload-497787 in Controller
	
	
	==> dmesg <==
	[Feb14 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051560] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039517] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.881993] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.641955] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.586007] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb14 21:42] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.056509] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060423] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.162812] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.145143] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.263124] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +11.363951] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.057145] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.067660] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +6.881228] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.394216] systemd-fstab-generator[1645]: Ignoring "noauto" option for root device
	[  +6.972237] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [246edf6198d5cefa31bf207a81fb300d2b8b9a79a62c968d49ea5d5d69f13af5] <==
	{"level":"info","ts":"2025-02-14T21:42:18.263Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fa515506e66f6916","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-14T21:42:18.266Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-14T21:42:18.271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 switched to configuration voters=(18037291470719772950)"}
	{"level":"info","ts":"2025-02-14T21:42:18.277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","added-peer-id":"fa515506e66f6916","added-peer-peer-urls":["https://192.168.39.128:2380"]}
	{"level":"info","ts":"2025-02-14T21:42:18.277Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b64da5b92548cbb8","local-member-id":"fa515506e66f6916","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:42:18.277Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:42:18.284Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:42:18.284Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2025-02-14T21:42:18.286Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.128:2380"}
	{"level":"info","ts":"2025-02-14T21:42:18.286Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fa515506e66f6916","initial-advertise-peer-urls":["https://192.168.39.128:2380"],"listen-peer-urls":["https://192.168.39.128:2380"],"advertise-client-urls":["https://192.168.39.128:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.128:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-14T21:42:18.286Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-14T21:42:19.730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-14T21:42:19.730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-14T21:42:19.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 received MsgPreVoteResp from fa515506e66f6916 at term 2"}
	{"level":"info","ts":"2025-02-14T21:42:19.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became candidate at term 3"}
	{"level":"info","ts":"2025-02-14T21:42:19.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 received MsgVoteResp from fa515506e66f6916 at term 3"}
	{"level":"info","ts":"2025-02-14T21:42:19.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fa515506e66f6916 became leader at term 3"}
	{"level":"info","ts":"2025-02-14T21:42:19.731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fa515506e66f6916 elected leader fa515506e66f6916 at term 3"}
	{"level":"info","ts":"2025-02-14T21:42:19.736Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fa515506e66f6916","local-member-attributes":"{Name:test-preload-497787 ClientURLs:[https://192.168.39.128:2379]}","request-path":"/0/members/fa515506e66f6916/attributes","cluster-id":"b64da5b92548cbb8","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:42:19.736Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:42:19.738Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:42:19.738Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:42:19.738Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:42:19.739Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-14T21:42:19.739Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.128:2379"}
	
	
	==> kernel <==
	 21:42:37 up 0 min,  0 users,  load average: 0.62, 0.19, 0.07
	Linux test-preload-497787 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b1a2e1a09645cb41d2fea4ca54e134b8b45b889381aa3da9c75c63378a43108] <==
	I0214 21:42:22.011718       1 establishing_controller.go:76] Starting EstablishingController
	I0214 21:42:22.016658       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0214 21:42:22.016789       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0214 21:42:22.016830       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0214 21:42:22.032369       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0214 21:42:22.032378       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0214 21:42:22.116162       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0214 21:42:22.116246       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0214 21:42:22.117095       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0214 21:42:22.128800       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0214 21:42:22.132626       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0214 21:42:22.176493       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 21:42:22.190057       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 21:42:22.196743       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0214 21:42:22.198608       1 cache.go:39] Caches are synced for autoregister controller
	I0214 21:42:22.696137       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0214 21:42:22.992934       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 21:42:23.888170       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0214 21:42:23.905860       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0214 21:42:23.946431       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0214 21:42:23.981006       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 21:42:23.993421       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 21:42:24.676253       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0214 21:42:35.124385       1 controller.go:611] quota admission added evaluator for: endpoints
	I0214 21:42:35.132790       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb2e87f46b5ff26bf6f242299ad83617cc0a379ae310ed4131d0b5a37e1950d2] <==
	I0214 21:42:35.066376       1 shared_informer.go:262] Caches are synced for disruption
	I0214 21:42:35.066425       1 disruption.go:371] Sending events to api server.
	I0214 21:42:35.071410       1 shared_informer.go:262] Caches are synced for attach detach
	I0214 21:42:35.086266       1 shared_informer.go:262] Caches are synced for persistent volume
	I0214 21:42:35.090776       1 shared_informer.go:262] Caches are synced for GC
	I0214 21:42:35.093930       1 shared_informer.go:262] Caches are synced for daemon sets
	I0214 21:42:35.098618       1 shared_informer.go:262] Caches are synced for taint
	I0214 21:42:35.098765       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0214 21:42:35.098821       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0214 21:42:35.098924       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-497787. Assuming now as a timestamp.
	I0214 21:42:35.098967       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0214 21:42:35.099154       1 event.go:294] "Event occurred" object="test-preload-497787" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-497787 event: Registered Node test-preload-497787 in Controller"
	I0214 21:42:35.101391       1 shared_informer.go:262] Caches are synced for stateful set
	I0214 21:42:35.101763       1 shared_informer.go:262] Caches are synced for resource quota
	I0214 21:42:35.106791       1 shared_informer.go:262] Caches are synced for resource quota
	I0214 21:42:35.112388       1 shared_informer.go:262] Caches are synced for PVC protection
	I0214 21:42:35.116664       1 shared_informer.go:262] Caches are synced for endpoint
	I0214 21:42:35.124767       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0214 21:42:35.125904       1 shared_informer.go:262] Caches are synced for ephemeral
	I0214 21:42:35.135045       1 shared_informer.go:262] Caches are synced for job
	I0214 21:42:35.138701       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0214 21:42:35.144492       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0214 21:42:35.548533       1 shared_informer.go:262] Caches are synced for garbage collector
	I0214 21:42:35.552896       1 shared_informer.go:262] Caches are synced for garbage collector
	I0214 21:42:35.552941       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [03d65d3930f6f6206c22965240d855bb2c40966f813b0be8fa628368925783ad] <==
	I0214 21:42:24.547192       1 node.go:163] Successfully retrieved node IP: 192.168.39.128
	I0214 21:42:24.547523       1 server_others.go:138] "Detected node IP" address="192.168.39.128"
	I0214 21:42:24.547595       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0214 21:42:24.631979       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0214 21:42:24.632228       1 server_others.go:206] "Using iptables Proxier"
	I0214 21:42:24.638368       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0214 21:42:24.639346       1 server.go:661] "Version info" version="v1.24.4"
	I0214 21:42:24.639516       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:42:24.642934       1 config.go:317] "Starting service config controller"
	I0214 21:42:24.644101       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0214 21:42:24.644156       1 config.go:226] "Starting endpoint slice config controller"
	I0214 21:42:24.644162       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0214 21:42:24.645570       1 config.go:444] "Starting node config controller"
	I0214 21:42:24.645603       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0214 21:42:24.745381       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0214 21:42:24.745454       1 shared_informer.go:262] Caches are synced for service config
	I0214 21:42:24.748466       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [ffbd185789a15d39e2088a798d6a51f5c7e7a7d630dac492036368c7f2ea17b8] <==
	I0214 21:42:19.117486       1 serving.go:348] Generated self-signed cert in-memory
	W0214 21:42:22.051475       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 21:42:22.051598       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 21:42:22.051626       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 21:42:22.051711       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 21:42:22.111648       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0214 21:42:22.111707       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:42:22.121461       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0214 21:42:22.121907       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 21:42:22.121973       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:42:22.122014       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0214 21:42:22.222973       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 21:42:22 test-preload-497787 kubelet[1128]: E0214 21:42:22.183974    1128 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Feb 14 21:42:22 test-preload-497787 kubelet[1128]: E0214 21:42:22.196354    1128 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.116605    1128 apiserver.go:52] "Watching apiserver"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.120840    1128 topology_manager.go:200] "Topology Admit Handler"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.120935    1128 topology_manager.go:200] "Topology Admit Handler"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.120967    1128 topology_manager.go:200] "Topology Admit Handler"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: E0214 21:42:23.123590    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wgdqk" podUID=590a6e91-eb8a-467c-bc35-2a02056a582b
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.203450    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cd6z\" (UniqueName: \"kubernetes.io/projected/16d48314-505e-4bde-825c-23e2606ed1eb-kube-api-access-6cd6z\") pod \"storage-provisioner\" (UID: \"16d48314-505e-4bde-825c-23e2606ed1eb\") " pod="kube-system/storage-provisioner"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.203763    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l7qm\" (UniqueName: \"kubernetes.io/projected/590a6e91-eb8a-467c-bc35-2a02056a582b-kube-api-access-8l7qm\") pod \"coredns-6d4b75cb6d-wgdqk\" (UID: \"590a6e91-eb8a-467c-bc35-2a02056a582b\") " pod="kube-system/coredns-6d4b75cb6d-wgdqk"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.203846    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/16d48314-505e-4bde-825c-23e2606ed1eb-tmp\") pod \"storage-provisioner\" (UID: \"16d48314-505e-4bde-825c-23e2606ed1eb\") " pod="kube-system/storage-provisioner"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.203909    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a9e5652-0991-419d-a6f4-75341aa44455-xtables-lock\") pod \"kube-proxy-4fsqn\" (UID: \"0a9e5652-0991-419d-a6f4-75341aa44455\") " pod="kube-system/kube-proxy-4fsqn"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.203958    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a9e5652-0991-419d-a6f4-75341aa44455-kube-proxy\") pod \"kube-proxy-4fsqn\" (UID: \"0a9e5652-0991-419d-a6f4-75341aa44455\") " pod="kube-system/kube-proxy-4fsqn"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.204011    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a9e5652-0991-419d-a6f4-75341aa44455-lib-modules\") pod \"kube-proxy-4fsqn\" (UID: \"0a9e5652-0991-419d-a6f4-75341aa44455\") " pod="kube-system/kube-proxy-4fsqn"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.204061    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhkt\" (UniqueName: \"kubernetes.io/projected/0a9e5652-0991-419d-a6f4-75341aa44455-kube-api-access-dlhkt\") pod \"kube-proxy-4fsqn\" (UID: \"0a9e5652-0991-419d-a6f4-75341aa44455\") " pod="kube-system/kube-proxy-4fsqn"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.204121    1128 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume\") pod \"coredns-6d4b75cb6d-wgdqk\" (UID: \"590a6e91-eb8a-467c-bc35-2a02056a582b\") " pod="kube-system/coredns-6d4b75cb6d-wgdqk"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: I0214 21:42:23.204288    1128 reconciler.go:159] "Reconciler: start to sync state"
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: E0214 21:42:23.306254    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: E0214 21:42:23.306573    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume podName:590a6e91-eb8a-467c-bc35-2a02056a582b nodeName:}" failed. No retries permitted until 2025-02-14 21:42:23.806536361 +0000 UTC m=+6.818856800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume") pod "coredns-6d4b75cb6d-wgdqk" (UID: "590a6e91-eb8a-467c-bc35-2a02056a582b") : object "kube-system"/"coredns" not registered
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: E0214 21:42:23.808507    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 14 21:42:23 test-preload-497787 kubelet[1128]: E0214 21:42:23.808558    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume podName:590a6e91-eb8a-467c-bc35-2a02056a582b nodeName:}" failed. No retries permitted until 2025-02-14 21:42:24.808545978 +0000 UTC m=+7.820866415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume") pod "coredns-6d4b75cb6d-wgdqk" (UID: "590a6e91-eb8a-467c-bc35-2a02056a582b") : object "kube-system"/"coredns" not registered
	Feb 14 21:42:24 test-preload-497787 kubelet[1128]: E0214 21:42:24.818335    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 14 21:42:24 test-preload-497787 kubelet[1128]: E0214 21:42:24.818412    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume podName:590a6e91-eb8a-467c-bc35-2a02056a582b nodeName:}" failed. No retries permitted until 2025-02-14 21:42:26.818389495 +0000 UTC m=+9.830709933 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume") pod "coredns-6d4b75cb6d-wgdqk" (UID: "590a6e91-eb8a-467c-bc35-2a02056a582b") : object "kube-system"/"coredns" not registered
	Feb 14 21:42:25 test-preload-497787 kubelet[1128]: E0214 21:42:25.235563    1128 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-wgdqk" podUID=590a6e91-eb8a-467c-bc35-2a02056a582b
	Feb 14 21:42:26 test-preload-497787 kubelet[1128]: E0214 21:42:26.838023    1128 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 14 21:42:26 test-preload-497787 kubelet[1128]: E0214 21:42:26.838164    1128 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume podName:590a6e91-eb8a-467c-bc35-2a02056a582b nodeName:}" failed. No retries permitted until 2025-02-14 21:42:30.838138973 +0000 UTC m=+13.850459424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/590a6e91-eb8a-467c-bc35-2a02056a582b-config-volume") pod "coredns-6d4b75cb6d-wgdqk" (UID: "590a6e91-eb8a-467c-bc35-2a02056a582b") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [184da0c26c9612c30ee38cce46fc89e8c85b2c0eb332de43f32a590aa0766926] <==
	I0214 21:42:24.499056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-497787 -n test-preload-497787
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-497787 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-497787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-497787
--- FAIL: TestPreload (279.33s)
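For anyone triaging this failure locally, the sketch below shows one plausible way to re-run only TestPreload from a minikube source checkout against the same driver and runtime used in this job. It is a hedged sketch, not taken from this report: the ./test/integration package path, the out/minikube-linux-amd64 make target, and the -minikube-start-args flag are assumptions about the test harness, so adjust them to whatever the checked-out tree actually provides.

	# hedged sketch: build the binary under test, then run just TestPreload
	make out/minikube-linux-amd64        # assumed make target for the binary exercised above
	go test ./test/integration/... -v -timeout 90m -run 'TestPreload' \
	  -args -minikube-start-args='--driver=kvm2 --container-runtime=crio'   # flag name assumed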

                                                
                                    
TestKubernetesUpgrade (433.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.448262174s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-041692] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-041692" primary control-plane node in "kubernetes-upgrade-041692" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:44:29.833422  283281 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:44:29.833553  283281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:44:29.833566  283281 out.go:358] Setting ErrFile to fd 2...
	I0214 21:44:29.833573  283281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:44:29.833786  283281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:44:29.834337  283281 out.go:352] Setting JSON to false
	I0214 21:44:29.835267  283281 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8814,"bootTime":1739560656,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:44:29.835356  283281 start.go:140] virtualization: kvm guest
	I0214 21:44:29.837108  283281 out.go:177] * [kubernetes-upgrade-041692] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:44:29.838537  283281 notify.go:220] Checking for updates...
	I0214 21:44:29.838592  283281 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:44:29.839873  283281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:44:29.841394  283281 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:44:29.842529  283281 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:44:29.843704  283281 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:44:29.845047  283281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:44:29.846551  283281 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:44:29.880201  283281 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:44:29.881307  283281 start.go:304] selected driver: kvm2
	I0214 21:44:29.881321  283281 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:44:29.881333  283281 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:44:29.882489  283281 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:44:29.901359  283281 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:44:29.920570  283281 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:44:29.920604  283281 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:44:29.920968  283281 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:44:29.920996  283281 cni.go:84] Creating CNI manager for ""
	I0214 21:44:29.921070  283281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:44:29.921089  283281 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 21:44:29.921149  283281 start.go:347] cluster config:
	{Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:44:29.921367  283281 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:44:29.922877  283281 out.go:177] * Starting "kubernetes-upgrade-041692" primary control-plane node in "kubernetes-upgrade-041692" cluster
	I0214 21:44:29.924002  283281 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:44:29.924032  283281 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0214 21:44:29.924043  283281 cache.go:56] Caching tarball of preloaded images
	I0214 21:44:29.924136  283281 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 21:44:29.924217  283281 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0214 21:44:29.924654  283281 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/config.json ...
	I0214 21:44:29.924709  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/config.json: {Name:mkf6125ce9ddde3ea3495c6ea89688ad1a1bb6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:44:29.924948  283281 start.go:360] acquireMachinesLock for kubernetes-upgrade-041692: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:44:55.758869  283281 start.go:364] duration metric: took 25.833873879s to acquireMachinesLock for "kubernetes-upgrade-041692"
	I0214 21:44:55.758937  283281 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:44:55.759051  283281 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 21:44:55.760634  283281 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0214 21:44:55.760825  283281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:44:55.760880  283281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:44:55.777771  283281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0214 21:44:55.778164  283281 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:44:55.778723  283281 main.go:141] libmachine: Using API Version  1
	I0214 21:44:55.778746  283281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:44:55.779145  283281 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:44:55.779375  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:44:55.779527  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:44:55.779691  283281 start.go:159] libmachine.API.Create for "kubernetes-upgrade-041692" (driver="kvm2")
	I0214 21:44:55.779724  283281 client.go:168] LocalClient.Create starting
	I0214 21:44:55.779749  283281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 21:44:55.779785  283281 main.go:141] libmachine: Decoding PEM data...
	I0214 21:44:55.779801  283281 main.go:141] libmachine: Parsing certificate...
	I0214 21:44:55.779856  283281 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 21:44:55.779874  283281 main.go:141] libmachine: Decoding PEM data...
	I0214 21:44:55.779887  283281 main.go:141] libmachine: Parsing certificate...
	I0214 21:44:55.779903  283281 main.go:141] libmachine: Running pre-create checks...
	I0214 21:44:55.779925  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .PreCreateCheck
	I0214 21:44:55.780348  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetConfigRaw
	I0214 21:44:55.780758  283281 main.go:141] libmachine: Creating machine...
	I0214 21:44:55.780773  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .Create
	I0214 21:44:55.780906  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) creating KVM machine...
	I0214 21:44:55.780929  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) creating network...
	I0214 21:44:55.782025  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found existing default KVM network
	I0214 21:44:55.783072  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:55.782905  283586 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5a:dc:e4} reservation:<nil>}
	I0214 21:44:55.783911  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:55.783828  283586 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002423e0}
	I0214 21:44:55.783953  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | created network xml: 
	I0214 21:44:55.783969  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | <network>
	I0214 21:44:55.783981  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   <name>mk-kubernetes-upgrade-041692</name>
	I0214 21:44:55.783991  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   <dns enable='no'/>
	I0214 21:44:55.784002  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   
	I0214 21:44:55.784031  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0214 21:44:55.784075  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |     <dhcp>
	I0214 21:44:55.784093  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0214 21:44:55.784104  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |     </dhcp>
	I0214 21:44:55.784114  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   </ip>
	I0214 21:44:55.784122  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG |   
	I0214 21:44:55.784130  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | </network>
	I0214 21:44:55.784147  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | 
	I0214 21:44:55.788703  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | trying to create private KVM network mk-kubernetes-upgrade-041692 192.168.50.0/24...
	I0214 21:44:55.855361  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | private KVM network mk-kubernetes-upgrade-041692 192.168.50.0/24 created
	I0214 21:44:55.855412  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:55.855331  283586 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:44:55.855460  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692 ...
	I0214 21:44:55.855486  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 21:44:55.855529  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 21:44:56.155363  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:56.155219  283586 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa...
	I0214 21:44:56.270084  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:56.269947  283586 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/kubernetes-upgrade-041692.rawdisk...
	I0214 21:44:56.270117  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | Writing magic tar header
	I0214 21:44:56.270136  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | Writing SSH key tar header
	I0214 21:44:56.270148  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:56.270065  283586 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692 ...
	I0214 21:44:56.270207  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692
	I0214 21:44:56.270241  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 21:44:56.270271  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692 (perms=drwx------)
	I0214 21:44:56.270286  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:44:56.270301  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 21:44:56.270317  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 21:44:56.270328  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 21:44:56.270364  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home/jenkins
	I0214 21:44:56.270378  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 21:44:56.270386  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | checking permissions on dir: /home
	I0214 21:44:56.270398  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | skipping /home - not owner
	I0214 21:44:56.270412  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 21:44:56.270422  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 21:44:56.270434  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 21:44:56.270443  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) creating domain...
	I0214 21:44:56.271639  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) define libvirt domain using xml: 
	I0214 21:44:56.271684  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) <domain type='kvm'>
	I0214 21:44:56.271698  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <name>kubernetes-upgrade-041692</name>
	I0214 21:44:56.271711  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <memory unit='MiB'>2200</memory>
	I0214 21:44:56.271739  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <vcpu>2</vcpu>
	I0214 21:44:56.271751  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <features>
	I0214 21:44:56.271760  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <acpi/>
	I0214 21:44:56.271776  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <apic/>
	I0214 21:44:56.271789  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <pae/>
	I0214 21:44:56.271801  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     
	I0214 21:44:56.271814  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   </features>
	I0214 21:44:56.271826  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <cpu mode='host-passthrough'>
	I0214 21:44:56.271835  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   
	I0214 21:44:56.271848  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   </cpu>
	I0214 21:44:56.271857  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <os>
	I0214 21:44:56.271869  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <type>hvm</type>
	I0214 21:44:56.271882  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <boot dev='cdrom'/>
	I0214 21:44:56.271893  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <boot dev='hd'/>
	I0214 21:44:56.271906  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <bootmenu enable='no'/>
	I0214 21:44:56.271917  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   </os>
	I0214 21:44:56.271939  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   <devices>
	I0214 21:44:56.271953  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <disk type='file' device='cdrom'>
	I0214 21:44:56.271970  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/boot2docker.iso'/>
	I0214 21:44:56.271983  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <target dev='hdc' bus='scsi'/>
	I0214 21:44:56.271997  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <readonly/>
	I0214 21:44:56.272011  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </disk>
	I0214 21:44:56.272026  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <disk type='file' device='disk'>
	I0214 21:44:56.272036  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 21:44:56.272056  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/kubernetes-upgrade-041692.rawdisk'/>
	I0214 21:44:56.272076  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <target dev='hda' bus='virtio'/>
	I0214 21:44:56.272088  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </disk>
	I0214 21:44:56.272101  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <interface type='network'>
	I0214 21:44:56.272114  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <source network='mk-kubernetes-upgrade-041692'/>
	I0214 21:44:56.272125  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <model type='virtio'/>
	I0214 21:44:56.272135  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </interface>
	I0214 21:44:56.272147  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <interface type='network'>
	I0214 21:44:56.272160  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <source network='default'/>
	I0214 21:44:56.272171  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <model type='virtio'/>
	I0214 21:44:56.272185  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </interface>
	I0214 21:44:56.272197  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <serial type='pty'>
	I0214 21:44:56.272211  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <target port='0'/>
	I0214 21:44:56.272223  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </serial>
	I0214 21:44:56.272233  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <console type='pty'>
	I0214 21:44:56.272244  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <target type='serial' port='0'/>
	I0214 21:44:56.272253  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </console>
	I0214 21:44:56.272264  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     <rng model='virtio'>
	I0214 21:44:56.272278  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)       <backend model='random'>/dev/random</backend>
	I0214 21:44:56.272289  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     </rng>
	I0214 21:44:56.272299  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     
	I0214 21:44:56.272310  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)     
	I0214 21:44:56.272322  283281 main.go:141] libmachine: (kubernetes-upgrade-041692)   </devices>
	I0214 21:44:56.272331  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) </domain>
	I0214 21:44:56.272347  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) 
	I0214 21:44:56.278805  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:d0:ea:be in network default
	I0214 21:44:56.279394  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:56.279414  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) starting domain...
	I0214 21:44:56.279428  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) ensuring networks are active...
	I0214 21:44:56.280123  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Ensuring network default is active
	I0214 21:44:56.280495  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Ensuring network mk-kubernetes-upgrade-041692 is active
	I0214 21:44:56.280973  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) getting domain XML...
	I0214 21:44:56.281637  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) creating domain...
	I0214 21:44:56.623033  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) waiting for IP...
	I0214 21:44:56.623753  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:56.624153  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:56.624220  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:56.624150  283586 retry.go:31] will retry after 312.147848ms: waiting for domain to come up
	I0214 21:44:56.937748  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:56.938358  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:56.938391  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:56.938315  283586 retry.go:31] will retry after 277.300229ms: waiting for domain to come up
	I0214 21:44:57.216646  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:57.217087  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:57.217120  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:57.217063  283586 retry.go:31] will retry after 481.509821ms: waiting for domain to come up
	I0214 21:44:57.700641  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:57.701077  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:57.701114  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:57.701045  283586 retry.go:31] will retry after 579.07376ms: waiting for domain to come up
	I0214 21:44:58.282064  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:58.282663  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:58.282697  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:58.282596  283586 retry.go:31] will retry after 614.898872ms: waiting for domain to come up
	I0214 21:44:58.898874  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:58.899345  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:58.899387  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:58.899287  283586 retry.go:31] will retry after 899.598687ms: waiting for domain to come up
	I0214 21:44:59.800425  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:44:59.800943  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:44:59.800982  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:44:59.800909  283586 retry.go:31] will retry after 1.129734067s: waiting for domain to come up
	I0214 21:45:00.933733  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:00.934132  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:00.934163  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:00.934105  283586 retry.go:31] will retry after 1.045973369s: waiting for domain to come up
	I0214 21:45:01.981185  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:01.981590  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:01.981689  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:01.981588  283586 retry.go:31] will retry after 1.493108672s: waiting for domain to come up
	I0214 21:45:03.477452  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:03.477869  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:03.477896  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:03.477849  283586 retry.go:31] will retry after 1.412785151s: waiting for domain to come up
	I0214 21:45:04.892453  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:04.893037  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:04.893071  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:04.893005  283586 retry.go:31] will retry after 2.169973669s: waiting for domain to come up
	I0214 21:45:07.065107  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:07.065632  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:07.065730  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:07.065629  283586 retry.go:31] will retry after 2.807885808s: waiting for domain to come up
	I0214 21:45:09.876518  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:09.876946  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:09.876977  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:09.876905  283586 retry.go:31] will retry after 3.130799752s: waiting for domain to come up
	I0214 21:45:13.008839  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:13.009329  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find current IP address of domain kubernetes-upgrade-041692 in network mk-kubernetes-upgrade-041692
	I0214 21:45:13.009388  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | I0214 21:45:13.009310  283586 retry.go:31] will retry after 5.527232021s: waiting for domain to come up
	I0214 21:45:18.539742  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.540229  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) found domain IP: 192.168.50.64
	I0214 21:45:18.540265  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has current primary IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
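	The "will retry after ..." lines above are the driver's wait-for-IP loop: it keeps re-reading the DHCP leases of the mk-kubernetes-upgrade-041692 network with a growing delay until the freshly booted domain shows up (here after roughly 22s, at 192.168.50.64). A minimal shell sketch of the same idea, assuming virsh is available; the polling command and backoff factor are illustrative, not what retry.go actually does:
	
	    delay=0.3
	    until virsh --connect qemu:///system domifaddr kubernetes-upgrade-041692 --source lease | grep -q ipv4; do
	        echo "unable to find current IP address; retrying in ${delay}s"
	        sleep "$delay"
	        delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')   # back off between attempts
	    done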
	I0214 21:45:18.540276  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) reserving static IP address...
	I0214 21:45:18.540583  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-041692", mac: "52:54:00:a1:95:40", ip: "192.168.50.64"} in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.611954  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | Getting to WaitForSSH function...
	I0214 21:45:18.611992  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) reserved static IP address 192.168.50.64 for domain kubernetes-upgrade-041692
	I0214 21:45:18.612009  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) waiting for SSH...
	I0214 21:45:18.614333  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.614658  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:18.614689  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.614834  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | Using SSH client type: external
	I0214 21:45:18.614859  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa (-rw-------)
	I0214 21:45:18.614891  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.64 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:45:18.614923  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | About to run SSH command:
	I0214 21:45:18.614939  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | exit 0
	I0214 21:45:18.738433  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | SSH cmd err, output: <nil>: 
	I0214 21:45:18.738703  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) KVM machine creation complete
	I0214 21:45:18.739029  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetConfigRaw
	I0214 21:45:18.739554  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:18.739760  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:18.739892  283281 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 21:45:18.739908  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetState
	I0214 21:45:18.741120  283281 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 21:45:18.741134  283281 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 21:45:18.741139  283281 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 21:45:18.741146  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:18.743378  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.743667  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:18.743693  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.743773  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:18.743965  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.744099  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.744223  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:18.744370  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:18.744574  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:18.744632  283281 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 21:45:18.845455  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:45:18.845473  283281 main.go:141] libmachine: Detecting the provisioner...
	I0214 21:45:18.845482  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:18.847807  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.848144  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:18.848191  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.848277  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:18.848438  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.848579  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.848716  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:18.848876  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:18.849030  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:18.849040  283281 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 21:45:18.950755  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 21:45:18.950821  283281 main.go:141] libmachine: found compatible host: buildroot
	I0214 21:45:18.950835  283281 main.go:141] libmachine: Provisioning with buildroot...
	I0214 21:45:18.950846  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:45:18.951025  283281 buildroot.go:166] provisioning hostname "kubernetes-upgrade-041692"
	I0214 21:45:18.951046  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:45:18.951200  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:18.953329  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.953644  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:18.953671  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:18.953788  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:18.953964  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.954124  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:18.954248  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:18.954401  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:18.954608  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:18.954645  283281 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-041692 && echo "kubernetes-upgrade-041692" | sudo tee /etc/hostname
	I0214 21:45:19.068159  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-041692
	
	I0214 21:45:19.068176  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.070237  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.070546  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.070575  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.070698  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.070892  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.071052  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.071198  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.071385  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:19.071531  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:19.071552  283281 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-041692' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-041692/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-041692' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:45:19.182493  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:45:19.182526  283281 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:45:19.182558  283281 buildroot.go:174] setting up certificates
	I0214 21:45:19.182568  283281 provision.go:84] configureAuth start
	I0214 21:45:19.182578  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:45:19.182857  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:45:19.185147  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.185459  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.185492  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.185597  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.187732  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.188104  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.188137  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.188266  283281 provision.go:143] copyHostCerts
	I0214 21:45:19.188331  283281 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:45:19.188355  283281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:45:19.188405  283281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:45:19.188506  283281 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:45:19.188514  283281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:45:19.188534  283281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:45:19.188598  283281 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:45:19.188605  283281 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:45:19.188623  283281 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:45:19.188690  283281 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-041692 san=[127.0.0.1 192.168.50.64 kubernetes-upgrade-041692 localhost minikube]
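	configureAuth then issues a server certificate for the machine, signed by the local minikube CA and valid for the SANs listed above (127.0.0.1, 192.168.50.64, the hostname, localhost, minikube). An equivalent one-off with openssl would look roughly like the following; minikube actually does this in Go, so the commands and file names here are only an illustration:
	
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	        -subj "/O=jenkins.kubernetes-upgrade-041692"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.64,DNS:kubernetes-upgrade-041692,DNS:localhost,DNS:minikube')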
	I0214 21:45:19.333941  283281 provision.go:177] copyRemoteCerts
	I0214 21:45:19.334012  283281 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:45:19.334049  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.336643  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.336944  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.336971  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.337147  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.337329  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.337488  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.337621  283281 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:45:19.420720  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:45:19.444976  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0214 21:45:19.467643  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 21:45:19.490265  283281 provision.go:87] duration metric: took 307.686987ms to configureAuth
	I0214 21:45:19.490288  283281 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:45:19.490477  283281 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:45:19.490549  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.492927  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.493234  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.493261  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.493417  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.493595  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.493752  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.493892  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.494006  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:19.494178  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:19.494194  283281 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:45:19.703507  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:45:19.703535  283281 main.go:141] libmachine: Checking connection to Docker...
	I0214 21:45:19.703545  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetURL
	I0214 21:45:19.704694  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | using libvirt version 6000000
	I0214 21:45:19.706745  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.707092  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.707132  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.707235  283281 main.go:141] libmachine: Docker is up and running!
	I0214 21:45:19.707249  283281 main.go:141] libmachine: Reticulating splines...
	I0214 21:45:19.707259  283281 client.go:171] duration metric: took 23.92752537s to LocalClient.Create
	I0214 21:45:19.707296  283281 start.go:167] duration metric: took 23.927595448s to libmachine.API.Create "kubernetes-upgrade-041692"
	I0214 21:45:19.707307  283281 start.go:293] postStartSetup for "kubernetes-upgrade-041692" (driver="kvm2")
	I0214 21:45:19.707315  283281 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:45:19.707346  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:19.707600  283281 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:45:19.707628  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.709772  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.710135  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.710173  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.710333  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.710500  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.710668  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.710823  283281 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:45:19.793410  283281 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:45:19.797821  283281 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:45:19.797845  283281 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:45:19.797911  283281 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:45:19.797993  283281 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:45:19.798085  283281 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:45:19.808707  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:45:19.832306  283281 start.go:296] duration metric: took 124.98936ms for postStartSetup
	I0214 21:45:19.832341  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetConfigRaw
	I0214 21:45:19.832804  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:45:19.835435  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.835841  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.835873  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.836128  283281 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/config.json ...
	I0214 21:45:19.836330  283281 start.go:128] duration metric: took 24.077259312s to createHost
	I0214 21:45:19.836355  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.838843  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.839157  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.839190  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.839316  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.839475  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.839610  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.839745  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.839868  283281 main.go:141] libmachine: Using SSH client type: native
	I0214 21:45:19.840044  283281 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:45:19.840063  283281 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:45:19.942662  283281 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569519.917999942
	
	I0214 21:45:19.942679  283281 fix.go:216] guest clock: 1739569519.917999942
	I0214 21:45:19.942686  283281 fix.go:229] Guest: 2025-02-14 21:45:19.917999942 +0000 UTC Remote: 2025-02-14 21:45:19.83634497 +0000 UTC m=+50.048418382 (delta=81.654972ms)
	I0214 21:45:19.942702  283281 fix.go:200] guest clock delta is within tolerance: 81.654972ms
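	(The delta checks out: guest 1739569519.917999942 - remote 1739569519.836344970 = 0.081654972 s, i.e. the 81.654972ms reported above, which the driver treats as within its clock-skew tolerance.)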
	I0214 21:45:19.942706  283281 start.go:83] releasing machines lock for "kubernetes-upgrade-041692", held for 24.183807921s
	I0214 21:45:19.942730  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:19.942926  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:45:19.945577  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.945866  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.945905  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.946109  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:19.946562  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:19.946763  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:45:19.946878  283281 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:45:19.946920  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.946978  283281 ssh_runner.go:195] Run: cat /version.json
	I0214 21:45:19.947005  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:45:19.949477  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.949639  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.949816  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.949844  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.949976  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.949976  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:19.950016  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:19.950105  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:45:19.950160  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.950249  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:45:19.950325  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.950414  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:45:19.950463  283281 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:45:19.950527  283281 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:45:20.031705  283281 ssh_runner.go:195] Run: systemctl --version
	I0214 21:45:20.058403  283281 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:45:20.217502  283281 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:45:20.225043  283281 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:45:20.225134  283281 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:45:20.249049  283281 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 21:45:20.249077  283281 start.go:495] detecting cgroup driver to use...
	I0214 21:45:20.249147  283281 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:45:20.272053  283281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:45:20.286239  283281 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:45:20.286298  283281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:45:20.299836  283281 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:45:20.313501  283281 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:45:20.439834  283281 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:45:20.607138  283281 docker.go:233] disabling docker service ...
	I0214 21:45:20.607216  283281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:45:20.621344  283281 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:45:20.633877  283281 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:45:20.769745  283281 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:45:20.895863  283281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:45:20.911392  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:45:20.931155  283281 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0214 21:45:20.931219  283281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:45:20.941963  283281 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:45:20.942007  283281 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:45:20.953054  283281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:45:20.963756  283281 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:45:20.975025  283281 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:45:20.985674  283281 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:45:20.995734  283281 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 21:45:20.995779  283281 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 21:45:21.009553  283281 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:45:21.020567  283281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:45:21.137001  283281 ssh_runner.go:195] Run: sudo systemctl restart crio
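	The sequence above rewrites CRI-O's drop-in config before restarting it: crictl is pointed at the crio socket, the pause image is pinned to registry.k8s.io/pause:3.2, the cgroup manager is switched to cgroupfs with conmon placed in the "pod" cgroup, stale CNI state under /etc/cni/net.mk is removed, and br_netfilter plus ip_forward are enabled. After the sed edits, /etc/crio/crio.conf.d/02-crio.conf ends up with roughly these values; only the three settings come from the log, while the section headers follow the usual crio.conf layout and are assumed here:
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.2"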
	I0214 21:45:21.231844  283281 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:45:21.231924  283281 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:45:21.236521  283281 start.go:563] Will wait 60s for crictl version
	I0214 21:45:21.236586  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:21.240695  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:45:21.279708  283281 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:45:21.279818  283281 ssh_runner.go:195] Run: crio --version
	I0214 21:45:21.308037  283281 ssh_runner.go:195] Run: crio --version
	I0214 21:45:21.335938  283281 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0214 21:45:21.337125  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:45:21.340855  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:21.341362  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:45:10 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:45:21.341392  283281 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:45:21.341600  283281 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0214 21:45:21.345881  283281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:45:21.359725  283281 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:45:21.359875  283281 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:45:21.359935  283281 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:45:21.392796  283281 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:45:21.392868  283281 ssh_runner.go:195] Run: which lz4
	I0214 21:45:21.396947  283281 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 21:45:21.401496  283281 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 21:45:21.401535  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0214 21:45:23.054927  283281 crio.go:462] duration metric: took 1.65800444s to copy over tarball
	I0214 21:45:23.055024  283281 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 21:45:25.604008  283281 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.548944706s)
	I0214 21:45:25.604054  283281 crio.go:469] duration metric: took 2.549092148s to extract the tarball
	I0214 21:45:25.604066  283281 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 21:45:25.646875  283281 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:45:25.699765  283281 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:45:25.699792  283281 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 21:45:25.699848  283281 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:45:25.699860  283281 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:25.699887  283281 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0214 21:45:25.699908  283281 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0214 21:45:25.700302  283281 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:25.699939  283281 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:25.700790  283281 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:25.700891  283281 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:25.703266  283281 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:25.703318  283281 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 21:45:25.703339  283281 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:25.703267  283281 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:25.703433  283281 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:45:25.703452  283281 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:25.703661  283281 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:25.703877  283281 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0214 21:45:25.862924  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:25.867582  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:25.869475  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:25.880131  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:25.881538  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:25.885299  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0214 21:45:25.903993  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0214 21:45:25.970859  283281 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0214 21:45:25.970886  283281 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0214 21:45:25.970914  283281 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:25.970925  283281 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:25.970964  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:25.970967  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:25.977675  283281 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0214 21:45:25.977715  283281 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:25.977751  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:26.032379  283281 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0214 21:45:26.032431  283281 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:26.032484  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:26.045768  283281 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0214 21:45:26.045817  283281 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:26.045854  283281 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0214 21:45:26.045866  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:26.045869  283281 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0214 21:45:26.045889  283281 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0214 21:45:26.045901  283281 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 21:45:26.045922  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:26.045923  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:26.045936  283281 ssh_runner.go:195] Run: which crictl
	I0214 21:45:26.045962  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:26.046027  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:26.046043  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:26.124751  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:26.146049  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:26.146987  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:26.147039  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:26.147072  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:45:26.147087  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:45:26.147045  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:26.223669  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:45:26.272722  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:45:26.280640  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:45:26.321143  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:45:26.321276  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:45:26.321302  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:26.321314  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:45:26.363201  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0214 21:45:26.407977  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0214 21:45:26.408052  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:45:26.453065  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:45:26.453123  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0214 21:45:26.453074  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0214 21:45:26.457343  283281 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:45:26.480256  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0214 21:45:26.515399  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0214 21:45:26.515457  283281 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0214 21:45:26.644634  283281 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:45:26.793544  283281 cache_images.go:92] duration metric: took 1.093730268s to LoadCachedImages
	W0214 21:45:26.793650  283281 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0214 21:45:26.793669  283281 kubeadm.go:926] updating node { 192.168.50.64 8443 v1.20.0 crio true true} ...
	I0214 21:45:26.793783  283281 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-041692 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:45:26.793862  283281 ssh_runner.go:195] Run: crio config
	I0214 21:45:26.851573  283281 cni.go:84] Creating CNI manager for ""
	I0214 21:45:26.851597  283281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:45:26.851608  283281 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:45:26.851629  283281 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.64 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-041692 NodeName:kubernetes-upgrade-041692 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 21:45:26.851796  283281 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-041692"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:45:26.851880  283281 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0214 21:45:26.862653  283281 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:45:26.862713  283281 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:45:26.872672  283281 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0214 21:45:26.892860  283281 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:45:26.912299  283281 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0214 21:45:26.933225  283281 ssh_runner.go:195] Run: grep 192.168.50.64	control-plane.minikube.internal$ /etc/hosts
	I0214 21:45:26.937285  283281 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.64	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:45:26.950374  283281 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:45:27.085027  283281 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:45:27.103015  283281 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692 for IP: 192.168.50.64
	I0214 21:45:27.103088  283281 certs.go:194] generating shared ca certs ...
	I0214 21:45:27.103118  283281 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.103291  283281 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:45:27.103347  283281 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:45:27.103367  283281 certs.go:256] generating profile certs ...
	I0214 21:45:27.103437  283281 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.key
	I0214 21:45:27.103454  283281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.crt with IP's: []
	I0214 21:45:27.280761  283281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.crt ...
	I0214 21:45:27.280794  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.crt: {Name:mkc6bc8a238dd738d05d1ebcf077f3cc9ec89daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.280995  283281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.key ...
	I0214 21:45:27.281013  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.key: {Name:mk407af47ea7fdcb08b9b9aed7f8088c83c2621e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.281141  283281 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key.6fdfc3ce
	I0214 21:45:27.281162  283281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt.6fdfc3ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.64]
	I0214 21:45:27.484969  283281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt.6fdfc3ce ...
	I0214 21:45:27.484998  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt.6fdfc3ce: {Name:mkb681898e24d5004d175a24c97058c342330463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.485170  283281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key.6fdfc3ce ...
	I0214 21:45:27.485187  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key.6fdfc3ce: {Name:mk1cc72021af33bad57a21f8b6601dfd2d70bc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.485291  283281 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt.6fdfc3ce -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt
	I0214 21:45:27.485381  283281 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key.6fdfc3ce -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key
	I0214 21:45:27.485442  283281 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key
	I0214 21:45:27.485461  283281 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.crt with IP's: []
	I0214 21:45:27.805889  283281 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.crt ...
	I0214 21:45:27.805920  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.crt: {Name:mk727bfa3765182c39002d20393f105555c7b59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.806112  283281 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key ...
	I0214 21:45:27.806130  283281 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key: {Name:mk02c6a8513089ce6de746fed17a9b7c48c8fe7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:45:27.806335  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:45:27.806375  283281 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:45:27.806385  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:45:27.806406  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:45:27.806429  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:45:27.806450  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:45:27.806485  283281 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:45:27.807085  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:45:27.836970  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:45:27.863598  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:45:27.888546  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:45:27.916273  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0214 21:45:27.945775  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 21:45:27.981577  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:45:28.017547  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 21:45:28.052842  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:45:28.081554  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:45:28.105385  283281 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:45:28.133191  283281 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:45:28.150828  283281 ssh_runner.go:195] Run: openssl version
	I0214 21:45:28.157751  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:45:28.168463  283281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:45:28.172873  283281 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:45:28.172935  283281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:45:28.178789  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:45:28.189737  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:45:28.200077  283281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:45:28.204839  283281 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:45:28.204889  283281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:45:28.211466  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 21:45:28.222388  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:45:28.232720  283281 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:45:28.237223  283281 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:45:28.237262  283281 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:45:28.243569  283281 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:45:28.255401  283281 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:45:28.259719  283281 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 21:45:28.259781  283281 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:45:28.259886  283281 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:45:28.259945  283281 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:45:28.297657  283281 cri.go:89] found id: ""
	I0214 21:45:28.297733  283281 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:45:28.308268  283281 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:45:28.317762  283281 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:45:28.327819  283281 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:45:28.327841  283281 kubeadm.go:157] found existing configuration files:
	
	I0214 21:45:28.327896  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:45:28.337861  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:45:28.337920  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:45:28.347818  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:45:28.357235  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:45:28.357298  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:45:28.366655  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:45:28.375424  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:45:28.375475  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:45:28.384589  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:45:28.396320  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:45:28.396370  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:45:28.408641  283281 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 21:45:28.549233  283281 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 21:45:28.549497  283281 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:45:28.732869  283281 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:45:28.733105  283281 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:45:28.733285  283281 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 21:45:28.952384  283281 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:45:29.104145  283281 out.go:235]   - Generating certificates and keys ...
	I0214 21:45:29.104269  283281 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:45:29.104369  283281 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:45:29.232426  283281 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 21:45:29.468339  283281 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 21:45:29.871013  283281 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 21:45:30.255178  283281 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 21:45:30.362719  283281 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 21:45:30.362918  283281 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	I0214 21:45:30.562264  283281 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 21:45:30.562538  283281 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	I0214 21:45:30.801259  283281 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 21:45:30.852742  283281 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 21:45:31.297734  283281 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 21:45:31.297987  283281 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:45:31.645460  283281 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:45:31.745269  283281 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:45:31.827254  283281 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:45:32.030197  283281 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:45:32.044980  283281 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:45:32.046422  283281 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:45:32.046473  283281 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:45:32.163778  283281 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:45:32.165223  283281 out.go:235]   - Booting up control plane ...
	I0214 21:45:32.165349  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:45:32.170391  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:45:32.171267  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:45:32.172439  283281 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:45:32.176326  283281 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 21:46:12.170423  283281 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 21:46:12.171030  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:46:12.171301  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:46:17.171586  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:46:17.171910  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:46:27.170906  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:46:27.171131  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:46:47.170563  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:46:47.170892  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:47:27.171935  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:47:27.172176  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:47:27.172219  283281 kubeadm.go:310] 
	I0214 21:47:27.172287  283281 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 21:47:27.172348  283281 kubeadm.go:310] 		timed out waiting for the condition
	I0214 21:47:27.172363  283281 kubeadm.go:310] 
	I0214 21:47:27.172408  283281 kubeadm.go:310] 	This error is likely caused by:
	I0214 21:47:27.172457  283281 kubeadm.go:310] 		- The kubelet is not running
	I0214 21:47:27.172597  283281 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 21:47:27.172606  283281 kubeadm.go:310] 
	I0214 21:47:27.172751  283281 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 21:47:27.172809  283281 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 21:47:27.172866  283281 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 21:47:27.172878  283281 kubeadm.go:310] 
	I0214 21:47:27.173035  283281 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 21:47:27.173132  283281 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 21:47:27.173142  283281 kubeadm.go:310] 
	I0214 21:47:27.173266  283281 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 21:47:27.173360  283281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 21:47:27.173431  283281 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 21:47:27.173538  283281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 21:47:27.173561  283281 kubeadm.go:310] 
	I0214 21:47:27.174404  283281 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:47:27.174536  283281 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 21:47:27.174660  283281 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0214 21:47:27.174819  283281 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-041692 localhost] and IPs [192.168.50.64 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 21:47:27.174868  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 21:47:28.221815  283281 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.046908212s)
	I0214 21:47:28.221911  283281 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:47:28.238144  283281 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:47:28.247523  283281 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:47:28.247553  283281 kubeadm.go:157] found existing configuration files:
	
	I0214 21:47:28.247610  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:47:28.256309  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:47:28.256373  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:47:28.265246  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:47:28.274076  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:47:28.274140  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:47:28.283295  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:47:28.291843  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:47:28.291890  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:47:28.300670  283281 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:47:28.309029  283281 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:47:28.309071  283281 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:47:28.317712  283281 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 21:47:28.553416  283281 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:49:24.658431  283281 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 21:49:24.658503  283281 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 21:49:24.660113  283281 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 21:49:24.660170  283281 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:49:24.660250  283281 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:49:24.660389  283281 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:49:24.660506  283281 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 21:49:24.660593  283281 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:49:24.661931  283281 out.go:235]   - Generating certificates and keys ...
	I0214 21:49:24.662014  283281 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:49:24.662089  283281 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:49:24.662191  283281 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 21:49:24.662241  283281 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 21:49:24.662296  283281 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 21:49:24.662338  283281 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 21:49:24.662404  283281 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 21:49:24.662454  283281 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 21:49:24.662517  283281 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 21:49:24.662578  283281 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 21:49:24.662614  283281 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 21:49:24.662681  283281 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:49:24.662735  283281 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:49:24.662820  283281 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:49:24.662908  283281 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:49:24.662987  283281 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:49:24.663108  283281 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:49:24.663178  283281 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:49:24.663210  283281 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:49:24.663266  283281 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:49:24.664479  283281 out.go:235]   - Booting up control plane ...
	I0214 21:49:24.664546  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:49:24.664625  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:49:24.664701  283281 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:49:24.664771  283281 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:49:24.664896  283281 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 21:49:24.664940  283281 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 21:49:24.664993  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:49:24.665150  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:49:24.665203  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:49:24.665371  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:49:24.665437  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:49:24.665639  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:49:24.665697  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:49:24.665853  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:49:24.665925  283281 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:49:24.666077  283281 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:49:24.666089  283281 kubeadm.go:310] 
	I0214 21:49:24.666120  283281 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 21:49:24.666182  283281 kubeadm.go:310] 		timed out waiting for the condition
	I0214 21:49:24.666203  283281 kubeadm.go:310] 
	I0214 21:49:24.666261  283281 kubeadm.go:310] 	This error is likely caused by:
	I0214 21:49:24.666312  283281 kubeadm.go:310] 		- The kubelet is not running
	I0214 21:49:24.666408  283281 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 21:49:24.666419  283281 kubeadm.go:310] 
	I0214 21:49:24.666505  283281 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 21:49:24.666533  283281 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 21:49:24.666560  283281 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 21:49:24.666566  283281 kubeadm.go:310] 
	I0214 21:49:24.666668  283281 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 21:49:24.666733  283281 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 21:49:24.666740  283281 kubeadm.go:310] 
	I0214 21:49:24.666823  283281 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 21:49:24.666900  283281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 21:49:24.666994  283281 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 21:49:24.667095  283281 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 21:49:24.667121  283281 kubeadm.go:310] 
	I0214 21:49:24.667177  283281 kubeadm.go:394] duration metric: took 3m56.40740252s to StartCluster
	I0214 21:49:24.667250  283281 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:49:24.667316  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:49:24.712160  283281 cri.go:89] found id: ""
	I0214 21:49:24.712179  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.712188  283281 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:49:24.712196  283281 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:49:24.712249  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:49:24.746470  283281 cri.go:89] found id: ""
	I0214 21:49:24.746488  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.746494  283281 logs.go:284] No container was found matching "etcd"
	I0214 21:49:24.746500  283281 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:49:24.746539  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:49:24.778881  283281 cri.go:89] found id: ""
	I0214 21:49:24.778903  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.778909  283281 logs.go:284] No container was found matching "coredns"
	I0214 21:49:24.778914  283281 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:49:24.778950  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:49:24.810817  283281 cri.go:89] found id: ""
	I0214 21:49:24.810833  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.810840  283281 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:49:24.810845  283281 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:49:24.810895  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:49:24.842963  283281 cri.go:89] found id: ""
	I0214 21:49:24.842984  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.842990  283281 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:49:24.842995  283281 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:49:24.843037  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:49:24.876500  283281 cri.go:89] found id: ""
	I0214 21:49:24.876517  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.876523  283281 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:49:24.876529  283281 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:49:24.876566  283281 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:49:24.911542  283281 cri.go:89] found id: ""
	I0214 21:49:24.911561  283281 logs.go:282] 0 containers: []
	W0214 21:49:24.911567  283281 logs.go:284] No container was found matching "kindnet"
	I0214 21:49:24.911576  283281 logs.go:123] Gathering logs for kubelet ...
	I0214 21:49:24.911590  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:49:24.962265  283281 logs.go:123] Gathering logs for dmesg ...
	I0214 21:49:24.962286  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:49:24.975070  283281 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:49:24.975092  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:49:25.085447  283281 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:49:25.085470  283281 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:49:25.085482  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:49:25.183503  283281 logs.go:123] Gathering logs for container status ...
	I0214 21:49:25.183528  283281 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 21:49:25.219878  283281 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 21:49:25.219933  283281 out.go:270] * 
	* 
	W0214 21:49:25.219991  283281 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 21:49:25.220010  283281 out.go:270] * 
	* 
	W0214 21:49:25.220885  283281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 21:49:25.223777  283281 out.go:201] 
	W0214 21:49:25.224891  283281 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 21:49:25.224939  283281 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 21:49:25.224971  283281 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 21:49:25.226167  283281 out.go:201] 

                                                
                                                
** /stderr **
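Note on the failure captured above: every kubelet-check line in the kubeadm output is the same probe failing, an HTTP GET against the kubelet's health endpoint on 127.0.0.1:10248 being refused because the kubelet never came up. The following is a minimal, self-contained Go sketch of that kind of probe, not code from minikube or kubeadm; the port and the /healthz path are taken from the log, while the 40s deadline echoes the "Initial timeout of 40s passed" message and the 2s retry interval is an arbitrary assumption.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// The log mentions an initial 40s kubelet-check timeout; reused here as a deadline.
	deadline := time.Now().Add(40 * time.Second)

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure mode captured above: connection refused
			// while the kubelet is not running.
			fmt.Printf("kubelet not healthy yet: %v\n", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("kubelet reports healthy")
			return
		}
		fmt.Printf("unexpected status: %s\n", resp.Status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}

On this run such a probe would have kept failing until the deadline, which is why kubeadm eventually gave up with "timed out waiting for the condition" and minikube surfaced K8S_KUBELET_NOT_RUNNING.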
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-041692
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-041692: (1.432497116s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-041692 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-041692 status --format={{.Host}}: exit status 7 (67.325167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
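The "may be ok" note above reflects how the harness treats the exit code of `minikube status`: a stopped profile reports a non-zero status, so exit status 7 together with a Host of "Stopped" is tolerated before the next start. Below is a hedged Go sketch of inspecting a subprocess exit code in that way; only the binary path and arguments are taken from the log, and the tolerance for status 7 is the test's own judgment, not something this sketch decides.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "kubernetes-upgrade-041692", "status", "--format={{.Host}}")
	out, err := cmd.Output()
	fmt.Printf("output: %s\n", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stopped profile yields a non-zero exit; the harness notes this
		// "may be ok" and proceeds to the upgrade step.
		fmt.Printf("exit status %d (may be ok for a stopped cluster)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube status:", err)
	}
}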
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.548783529s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-041692 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.463246ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-041692] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-041692
	    minikube start -p kubernetes-upgrade-041692 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0416922 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-041692 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
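The exit status 106 above is the expected outcome of this step: minikube compares the requested Kubernetes version (v1.20.0) against the version the existing profile already runs (v1.32.1) and refuses the downgrade, listing the three recovery paths shown in the stderr block. The sketch below illustrates the kind of semantic-version comparison behind such a decision; the version strings come from the log, but parseVersion and isDowngrade are hypothetical helpers for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVersion turns "v1.32.1" into [1 32 1]; simplified for illustration,
// it ignores pre-release and build metadata.
func parseVersion(v string) ([3]int, error) {
	var out [3]int
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) != 3 {
		return out, fmt.Errorf("unexpected version format: %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}

// isDowngrade reports whether requested is older than existing.
func isDowngrade(existing, requested string) (bool, error) {
	e, err := parseVersion(existing)
	if err != nil {
		return false, err
	}
	r, err := parseVersion(requested)
	if err != nil {
		return false, err
	}
	for i := 0; i < 3; i++ {
		if r[i] != e[i] {
			return r[i] < e[i], nil
		}
	}
	return false, nil
}

func main() {
	// Versions taken from the log above.
	down, err := isDowngrade("v1.32.1", "v1.20.0")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("downgrade requested:", down) // true -> refused, as in the test
}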
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-041692 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.823640999s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-14 21:51:40.288925108 +0000 UTC m=+4045.993494043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-041692 -n kubernetes-upgrade-041692
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-041692 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-041692 logs -n 25: (1.6066465s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-266997 sudo find            | cilium-266997             | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo crio            | cilium-266997             | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-266997                      | cilium-266997             | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:48 UTC |
	| start   | -p force-systemd-env-054462           | force-systemd-env-054462  | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:49 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-865564                       | pause-865564              | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:48 UTC |
	| start   | -p force-systemd-flag-203280          | force-systemd-flag-203280 | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:49 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-201553 sudo           | NoKubernetes-201553       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-201553                | NoKubernetes-201553       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:48 UTC |
	| start   | -p NoKubernetes-201553                | NoKubernetes-201553       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:49 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-054462           | force-systemd-env-054462  | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	| start   | -p cert-expiration-191481             | cert-expiration-191481    | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:50 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-041692          | kubernetes-upgrade-041692 | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	| start   | -p kubernetes-upgrade-041692          | kubernetes-upgrade-041692 | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-203280 ssh cat     | force-systemd-flag-203280 | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-203280          | force-systemd-flag-203280 | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	| start   | -p cert-options-733237                | cert-options-733237       | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:51 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-201553 sudo           | NoKubernetes-201553       | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-201553                | NoKubernetes-201553       | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC | 14 Feb 25 21:49 UTC |
	| start   | -p old-k8s-version-201745             | old-k8s-version-201745    | jenkins | v1.35.0 | 14 Feb 25 21:49 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-041692          | kubernetes-upgrade-041692 | jenkins | v1.35.0 | 14 Feb 25 21:50 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-041692          | kubernetes-upgrade-041692 | jenkins | v1.35.0 | 14 Feb 25 21:50 UTC | 14 Feb 25 21:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-733237 ssh               | cert-options-733237       | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:51 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-733237 -- sudo        | cert-options-733237       | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:51 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-733237                | cert-options-733237       | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC | 14 Feb 25 21:51 UTC |
	| start   | -p no-preload-926549                  | no-preload-926549         | jenkins | v1.35.0 | 14 Feb 25 21:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:51:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:51:15.852802  291072 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:51:15.853015  291072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:51:15.853026  291072 out.go:358] Setting ErrFile to fd 2...
	I0214 21:51:15.853032  291072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:51:15.853245  291072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:51:15.853858  291072 out.go:352] Setting JSON to false
	I0214 21:51:15.854871  291072 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9220,"bootTime":1739560656,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:51:15.854960  291072 start.go:140] virtualization: kvm guest
	I0214 21:51:15.856950  291072 out.go:177] * [no-preload-926549] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:51:15.858242  291072 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:51:15.858247  291072 notify.go:220] Checking for updates...
	I0214 21:51:15.860676  291072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:51:15.861791  291072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:51:15.862841  291072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:51:15.863940  291072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:51:15.865067  291072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:51:15.866436  291072 config.go:182] Loaded profile config "cert-expiration-191481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:51:15.866547  291072 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:51:15.866716  291072 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:51:15.866824  291072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:51:15.902228  291072 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:51:15.903300  291072 start.go:304] selected driver: kvm2
	I0214 21:51:15.903314  291072 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:51:15.903327  291072 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:51:15.904233  291072 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.904336  291072 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:51:15.919052  291072 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:51:15.919121  291072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:51:15.919378  291072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:51:15.919410  291072 cni.go:84] Creating CNI manager for ""
	I0214 21:51:15.919471  291072 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:51:15.919482  291072 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 21:51:15.919553  291072 start.go:347] cluster config:
	{Name:no-preload-926549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-926549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:51:15.919658  291072 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.921027  291072 out.go:177] * Starting "no-preload-926549" primary control-plane node in "no-preload-926549" cluster
	I0214 21:51:20.146828  290611 start.go:364] duration metric: took 30.537001247s to acquireMachinesLock for "kubernetes-upgrade-041692"
	I0214 21:51:20.146879  290611 start.go:96] Skipping create...Using existing machine configuration
	I0214 21:51:20.146886  290611 fix.go:54] fixHost starting: 
	I0214 21:51:20.147302  290611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:51:20.147366  290611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:51:20.164381  290611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0214 21:51:20.164854  290611 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:51:20.165595  290611 main.go:141] libmachine: Using API Version  1
	I0214 21:51:20.165640  290611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:51:20.165992  290611 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:51:20.166168  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:20.166313  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetState
	I0214 21:51:20.167876  290611 fix.go:112] recreateIfNeeded on kubernetes-upgrade-041692: state=Running err=<nil>
	W0214 21:51:20.167918  290611 fix.go:138] unexpected machine state, will restart: <nil>
	I0214 21:51:20.169791  290611 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-041692" VM ...
	I0214 21:51:15.640099  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.640675  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has current primary IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.640700  290030 main.go:141] libmachine: (old-k8s-version-201745) found domain IP: 192.168.72.19
	I0214 21:51:15.640713  290030 main.go:141] libmachine: (old-k8s-version-201745) reserving static IP address...
	I0214 21:51:15.640986  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-201745", mac: "52:54:00:6d:30:ba", ip: "192.168.72.19"} in network mk-old-k8s-version-201745
	I0214 21:51:15.715520  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Getting to WaitForSSH function...
	I0214 21:51:15.715550  290030 main.go:141] libmachine: (old-k8s-version-201745) reserved static IP address 192.168.72.19 for domain old-k8s-version-201745
	I0214 21:51:15.715563  290030 main.go:141] libmachine: (old-k8s-version-201745) waiting for SSH...
	I0214 21:51:15.718541  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.719032  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745
	I0214 21:51:15.719060  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find defined IP address of network mk-old-k8s-version-201745 interface with MAC address 52:54:00:6d:30:ba
	I0214 21:51:15.719260  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH client type: external
	I0214 21:51:15.719310  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa (-rw-------)
	I0214 21:51:15.719384  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:51:15.719407  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | About to run SSH command:
	I0214 21:51:15.719421  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | exit 0
	I0214 21:51:15.723259  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | SSH cmd err, output: exit status 255: 
	I0214 21:51:15.723284  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0214 21:51:15.723294  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | command : exit 0
	I0214 21:51:15.723305  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | err     : exit status 255
	I0214 21:51:15.723318  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | output  : 
	I0214 21:51:18.723432  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Getting to WaitForSSH function...
	I0214 21:51:18.725873  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.726290  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.726324  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.726416  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH client type: external
	I0214 21:51:18.726440  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa (-rw-------)
	I0214 21:51:18.726492  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:51:18.726516  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | About to run SSH command:
	I0214 21:51:18.726559  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | exit 0
	I0214 21:51:18.854734  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | SSH cmd err, output: <nil>: 
	I0214 21:51:18.854978  290030 main.go:141] libmachine: (old-k8s-version-201745) KVM machine creation complete
	I0214 21:51:18.855264  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:51:18.855878  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:18.856070  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:18.856221  290030 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 21:51:18.856246  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetState
	I0214 21:51:18.857655  290030 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 21:51:18.857667  290030 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 21:51:18.857672  290030 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 21:51:18.857678  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:18.860018  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.860340  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.860362  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.860546  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:18.860711  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.860828  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.860966  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:18.861120  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:18.861388  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:18.861403  290030 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 21:51:18.973467  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:51:18.973488  290030 main.go:141] libmachine: Detecting the provisioner...
	I0214 21:51:18.973498  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:18.975816  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.976116  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.976159  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.976279  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:18.976456  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.976572  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.976662  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:18.976784  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:18.976987  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:18.977004  290030 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 21:51:19.090945  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 21:51:19.091009  290030 main.go:141] libmachine: found compatible host: buildroot
	I0214 21:51:19.091023  290030 main.go:141] libmachine: Provisioning with buildroot...
	I0214 21:51:19.091033  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.091241  290030 buildroot.go:166] provisioning hostname "old-k8s-version-201745"
	I0214 21:51:19.091270  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.091452  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.094065  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.094414  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.094440  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.094593  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.094795  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.094958  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.095110  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.095272  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.095437  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.095454  290030 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-201745 && echo "old-k8s-version-201745" | sudo tee /etc/hostname
	I0214 21:51:19.220062  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-201745
	
	I0214 21:51:19.220089  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.223057  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.223416  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.223447  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.223621  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.223801  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.223975  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.224107  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.224265  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.224482  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.224505  290030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-201745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-201745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-201745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:51:19.343025  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:51:19.343046  290030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:51:19.343072  290030 buildroot.go:174] setting up certificates
	I0214 21:51:19.343085  290030 provision.go:84] configureAuth start
	I0214 21:51:19.343094  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.343305  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:19.345461  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.345781  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.345802  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.346004  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.348488  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.348896  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.348924  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.349075  290030 provision.go:143] copyHostCerts
	I0214 21:51:19.349175  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:51:19.349195  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:51:19.349262  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:51:19.349347  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:51:19.349355  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:51:19.349376  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:51:19.349425  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:51:19.349431  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:51:19.349447  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:51:19.349490  290030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-201745 san=[127.0.0.1 192.168.72.19 localhost minikube old-k8s-version-201745]
	I0214 21:51:19.490071  290030 provision.go:177] copyRemoteCerts
	I0214 21:51:19.490142  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:51:19.490171  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.492319  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.492662  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.492693  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.492871  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.493054  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.493217  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.493348  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:19.580628  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0214 21:51:19.606167  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:51:19.630070  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:51:19.654540  290030 provision.go:87] duration metric: took 311.444497ms to configureAuth
	I0214 21:51:19.654561  290030 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:51:19.654747  290030 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:51:19.654829  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.657192  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.657555  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.657605  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.657786  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.657983  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.658158  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.658304  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.658512  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.658770  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.658789  290030 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:51:19.895793  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:51:19.895830  290030 main.go:141] libmachine: Checking connection to Docker...
	I0214 21:51:19.895843  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetURL
	I0214 21:51:19.897101  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | using libvirt version 6000000
	I0214 21:51:19.899085  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.899443  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.899474  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.899641  290030 main.go:141] libmachine: Docker is up and running!
	I0214 21:51:19.899659  290030 main.go:141] libmachine: Reticulating splines...
	I0214 21:51:19.899668  290030 client.go:171] duration metric: took 24.516107336s to LocalClient.Create
	I0214 21:51:19.899696  290030 start.go:167] duration metric: took 24.516190058s to libmachine.API.Create "old-k8s-version-201745"
	I0214 21:51:19.899707  290030 start.go:293] postStartSetup for "old-k8s-version-201745" (driver="kvm2")
	I0214 21:51:19.899716  290030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:51:19.899733  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:19.899970  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:51:19.899997  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.901854  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.902204  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.902252  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.902409  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.902568  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.902752  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.902925  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:19.988715  290030 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:51:19.992916  290030 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:51:19.992942  290030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:51:19.992999  290030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:51:19.993102  290030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:51:19.993218  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:51:20.002821  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:20.027189  290030 start.go:296] duration metric: took 127.471428ms for postStartSetup
	I0214 21:51:20.027234  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:51:20.027754  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:20.030174  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.030496  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.030541  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.030800  290030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json ...
	I0214 21:51:20.031021  290030 start.go:128] duration metric: took 24.667536425s to createHost
	I0214 21:51:20.031048  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.033286  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.033584  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.033612  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.033720  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.033920  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.034081  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.034221  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.034383  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.034560  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:20.034571  290030 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:51:20.146699  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569880.118420137
	
	I0214 21:51:20.146719  290030 fix.go:216] guest clock: 1739569880.118420137
	I0214 21:51:20.146726  290030 fix.go:229] Guest: 2025-02-14 21:51:20.118420137 +0000 UTC Remote: 2025-02-14 21:51:20.031034691 +0000 UTC m=+89.511546951 (delta=87.385446ms)
	I0214 21:51:20.146742  290030 fix.go:200] guest clock delta is within tolerance: 87.385446ms
	I0214 21:51:20.146747  290030 start.go:83] releasing machines lock for "old-k8s-version-201745", held for 24.783455513s
	I0214 21:51:20.146767  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.146964  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:20.149585  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.149939  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.149964  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.150137  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150597  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150786  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150893  290030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:51:20.150936  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.151011  290030 ssh_runner.go:195] Run: cat /version.json
	I0214 21:51:20.151035  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.153637  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.153678  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.153993  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.154014  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.154038  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.154055  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.154276  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.154357  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.154439  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.154495  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.154585  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.154643  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.154683  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:20.154998  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:20.259516  290030 ssh_runner.go:195] Run: systemctl --version
	I0214 21:51:20.265435  290030 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:51:20.423713  290030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:51:20.430194  290030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:51:20.430249  290030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:51:20.447320  290030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 21:51:20.447350  290030 start.go:495] detecting cgroup driver to use...
	I0214 21:51:20.447403  290030 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:51:20.463539  290030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:51:20.477217  290030 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:51:20.477295  290030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:51:20.490458  290030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:51:20.506265  290030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:51:15.922027  291072 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:51:15.922142  291072 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/config.json ...
	I0214 21:51:15.922171  291072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/config.json: {Name:mkaa14b67318cd4ecf822c04bf015a1ae29f20f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:15.922241  291072 cache.go:107] acquiring lock: {Name:mk7261031ebd9dda8d474e42c99068150207c03b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922289  291072 cache.go:107] acquiring lock: {Name:mk0056d08aa73b465e427212b7548012ece5e613 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922288  291072 cache.go:107] acquiring lock: {Name:mkad0a4ea25c64225fa2e22514c3ee107fda8b09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922289  291072 cache.go:107] acquiring lock: {Name:mked081fac49e10dd02c75d55ff12b5b754e84ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922319  291072 start.go:360] acquireMachinesLock for no-preload-926549: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:51:15.922251  291072 cache.go:107] acquiring lock: {Name:mk20379e9d6953debdb707125fcc222d991f991c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922428  291072 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0214 21:51:15.922445  291072 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.1
	I0214 21:51:15.922412  291072 cache.go:107] acquiring lock: {Name:mk853f7fee1d80fa4e4e93da65acf85fc50df4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922335  291072 cache.go:115] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0214 21:51:15.922455  291072 cache.go:107] acquiring lock: {Name:mka005522b0f200b9b9fb571c35e8a567cb86c7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922487  291072 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0214 21:51:15.922496  291072 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.1
	I0214 21:51:15.922467  291072 cache.go:107] acquiring lock: {Name:mkcd68c024c43cafb2c5f39de9537239a1875fcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:51:15.922587  291072 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.1
	I0214 21:51:15.922492  291072 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 257.92µs
	I0214 21:51:15.922645  291072 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0214 21:51:15.922683  291072 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0214 21:51:15.922741  291072 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0214 21:51:15.923858  291072 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.1
	I0214 21:51:15.923861  291072 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.1
	I0214 21:51:15.923876  291072 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0214 21:51:15.923858  291072 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.1
	I0214 21:51:15.923860  291072 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0214 21:51:15.923863  291072 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.1
	I0214 21:51:15.924088  291072 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0214 21:51:16.089664  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0214 21:51:16.119198  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1
	I0214 21:51:16.123459  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1
	I0214 21:51:16.129879  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0214 21:51:16.129971  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1
	I0214 21:51:16.145153  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0214 21:51:16.178288  291072 cache.go:162] opening:  /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1
	I0214 21:51:16.207782  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0214 21:51:16.207798  291072 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 285.399704ms
	I0214 21:51:16.207806  291072 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0214 21:51:16.515130  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0214 21:51:16.515153  291072 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 592.911925ms
	I0214 21:51:16.515165  291072 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0214 21:51:17.492042  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0214 21:51:17.492070  291072 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 1.56969976s
	I0214 21:51:17.492081  291072 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0214 21:51:17.644642  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0214 21:51:17.644673  291072 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 1.722392844s
	I0214 21:51:17.644691  291072 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0214 21:51:17.701548  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0214 21:51:17.701585  291072 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 1.779308594s
	I0214 21:51:17.701600  291072 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0214 21:51:17.723170  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0214 21:51:17.723215  291072 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.800795566s
	I0214 21:51:17.723227  291072 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0214 21:51:18.019487  291072 cache.go:157] /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0214 21:51:18.019514  291072 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.097225s
	I0214 21:51:18.019525  291072 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0214 21:51:18.019542  291072 cache.go:87] Successfully saved all images to host disk.
	I0214 21:51:20.635362  290030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:51:20.791913  290030 docker.go:233] disabling docker service ...
	I0214 21:51:20.791970  290030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:51:20.808889  290030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:51:20.822017  290030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:51:20.944211  290030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:51:21.080254  290030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:51:21.095875  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:51:21.114491  290030 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0214 21:51:21.114553  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.125025  290030 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:51:21.125074  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.135881  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.146571  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.156909  290030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:51:21.167466  290030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:51:21.176847  290030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 21:51:21.176902  290030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 21:51:21.189597  290030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:51:21.198779  290030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:21.311520  290030 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:51:21.406134  290030 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:51:21.406221  290030 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:51:21.410963  290030 start.go:563] Will wait 60s for crictl version
	I0214 21:51:21.411013  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:21.414903  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:51:21.454276  290030 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:51:21.454366  290030 ssh_runner.go:195] Run: crio --version
	I0214 21:51:21.481275  290030 ssh_runner.go:195] Run: crio --version
	I0214 21:51:21.509347  290030 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0214 21:51:20.170985  290611 machine.go:93] provisionDockerMachine start ...
	I0214 21:51:20.171007  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:20.171251  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.173839  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.174276  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.174312  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.174526  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:20.174720  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.174867  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.175036  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:20.175201  290611 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.175417  290611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:51:20.175429  290611 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:51:20.275659  290611 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-041692
	
	I0214 21:51:20.275685  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:51:20.275897  290611 buildroot.go:166] provisioning hostname "kubernetes-upgrade-041692"
	I0214 21:51:20.275926  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:51:20.276092  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.279156  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.279578  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.279624  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.279742  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:20.279930  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.280106  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.280228  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:20.280412  290611 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.280584  290611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:51:20.280597  290611 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-041692 && echo "kubernetes-upgrade-041692" | sudo tee /etc/hostname
	I0214 21:51:20.400380  290611 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-041692
	
	I0214 21:51:20.400407  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.403103  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.403483  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.403513  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.403636  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:20.403798  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.403962  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.404057  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:20.404223  290611 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.404428  290611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:51:20.404452  290611 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-041692' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-041692/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-041692' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:51:20.508308  290611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:51:20.508338  290611 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:51:20.508362  290611 buildroot.go:174] setting up certificates
	I0214 21:51:20.508374  290611 provision.go:84] configureAuth start
	I0214 21:51:20.508387  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetMachineName
	I0214 21:51:20.508646  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:51:20.511529  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.511951  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.511984  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.512212  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.515192  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.515604  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.515637  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.515779  290611 provision.go:143] copyHostCerts
	I0214 21:51:20.515861  290611 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:51:20.515880  290611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:51:20.515948  290611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:51:20.516079  290611 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:51:20.516097  290611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:51:20.516129  290611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:51:20.516215  290611 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:51:20.516230  290611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:51:20.516263  290611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:51:20.516344  290611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-041692 san=[127.0.0.1 192.168.50.64 kubernetes-upgrade-041692 localhost minikube]
	I0214 21:51:20.714452  290611 provision.go:177] copyRemoteCerts
	I0214 21:51:20.714532  290611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:51:20.714558  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.717189  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.717588  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.717619  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.717762  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:20.717934  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.718096  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:20.718222  290611 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:51:20.798011  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:51:20.827903  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0214 21:51:20.852002  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 21:51:20.876030  290611 provision.go:87] duration metric: took 367.646067ms to configureAuth
	I0214 21:51:20.876055  290611 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:51:20.876223  290611 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:51:20.876308  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:20.879048  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.879511  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:20.879540  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:20.879761  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:20.879942  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.880133  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:20.880311  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:20.880555  290611 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.880767  290611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:51:20.880790  290611 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:51:21.510550  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:21.512928  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:21.513300  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:21.513329  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:21.513550  290030 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0214 21:51:21.517441  290030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:51:21.529465  290030 kubeadm.go:875] updating cluster {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:51:21.529594  290030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:51:21.529640  290030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:21.560058  290030 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:51:21.560112  290030 ssh_runner.go:195] Run: which lz4
	I0214 21:51:21.563873  290030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 21:51:21.567845  290030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 21:51:21.567874  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0214 21:51:23.150723  290030 crio.go:462] duration metric: took 1.586877998s to copy over tarball
	I0214 21:51:23.150796  290030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 21:51:27.167802  291072 start.go:364] duration metric: took 11.245430852s to acquireMachinesLock for "no-preload-926549"
	I0214 21:51:27.167865  291072 start.go:93] Provisioning new machine with config: &{Name:no-preload-926549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-926549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:51:27.168008  291072 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 21:51:25.603024  290030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.452191708s)
	I0214 21:51:25.603059  290030 crio.go:469] duration metric: took 2.452307853s to extract the tarball
	I0214 21:51:25.603069  290030 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 21:51:25.647042  290030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:25.691327  290030 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:51:25.691354  290030 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 21:51:25.691442  290030 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:25.691458  290030 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.691475  290030 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0214 21:51:25.691483  290030 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.691465  290030 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.691449  290030 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.691546  290030 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.691570  290030 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.693361  290030 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.693464  290030 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.693499  290030 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:25.693511  290030 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.693366  290030 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.693788  290030 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.694029  290030 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.694062  290030 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 21:51:25.846856  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.857207  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.858123  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.868799  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.871190  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.883774  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0214 21:51:25.890757  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.937841  290030 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0214 21:51:25.937904  290030 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.937955  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:25.974145  290030 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0214 21:51:25.974181  290030 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.974191  290030 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0214 21:51:25.974225  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:25.974229  290030 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.974271  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.019744  290030 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0214 21:51:26.019798  290030 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.019862  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.025283  290030 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0214 21:51:26.025329  290030 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.025381  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.033932  290030 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0214 21:51:26.033957  290030 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0214 21:51:26.033973  290030 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.033978  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.033989  290030 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 21:51:26.034010  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.034028  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.034066  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.034102  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.034116  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.034080  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.112992  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.160461  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.160467  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.163253  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.163321  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.163352  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.163369  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.215854  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.305663  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.325715  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.325765  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.325778  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.325715  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.325778  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.350427  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0214 21:51:26.489615  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.515867  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.515959  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0214 21:51:26.515998  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0214 21:51:26.516063  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0214 21:51:26.516087  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0214 21:51:26.536809  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:26.539975  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0214 21:51:26.574869  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0214 21:51:26.696128  290030 cache_images.go:92] duration metric: took 1.004755714s to LoadCachedImages
	W0214 21:51:26.696227  290030 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0214 21:51:26.696245  290030 kubeadm.go:926] updating node { 192.168.72.19 8443 v1.20.0 crio true true} ...
	I0214 21:51:26.696373  290030 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-201745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:51:26.696466  290030 ssh_runner.go:195] Run: crio config
	I0214 21:51:26.746613  290030 cni.go:84] Creating CNI manager for ""
	I0214 21:51:26.746658  290030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:51:26.746670  290030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:51:26.746697  290030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-201745 NodeName:old-k8s-version-201745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 21:51:26.746885  290030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-201745"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:51:26.746970  290030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0214 21:51:26.757127  290030 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:51:26.757199  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:51:26.766779  290030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0214 21:51:26.787809  290030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:51:26.805088  290030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0214 21:51:26.824439  290030 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0214 21:51:26.828275  290030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:51:26.840675  290030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:26.964411  290030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:51:26.982471  290030 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745 for IP: 192.168.72.19
	I0214 21:51:26.982493  290030 certs.go:194] generating shared ca certs ...
	I0214 21:51:26.982513  290030 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:26.982702  290030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:51:26.982762  290030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:51:26.982776  290030 certs.go:256] generating profile certs ...
	I0214 21:51:26.982866  290030 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key
	I0214 21:51:26.982883  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt with IP's: []
	I0214 21:51:27.086210  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt ...
	I0214 21:51:27.086243  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt: {Name:mk78690042ad4da1a6a4edca3f1fc615ab233f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.086454  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key ...
	I0214 21:51:27.086476  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key: {Name:mk9dcc9f8bf351125336639900feaa5a54463656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.086614  290030 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282
	I0214 21:51:27.086666  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.19]
	I0214 21:51:27.176437  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 ...
	I0214 21:51:27.176465  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282: {Name:mkbedbb12462578a35a6cf17b6a8d3bfc9a61c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.183135  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282 ...
	I0214 21:51:27.183163  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282: {Name:mk8e3fd4279cbf58b4cf8bc88b52058b57b99cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.183287  290030 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt
	I0214 21:51:27.183414  290030 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key
	I0214 21:51:27.183509  290030 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key
	I0214 21:51:27.183532  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt with IP's: []
	I0214 21:51:27.332957  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt ...
	I0214 21:51:27.332985  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt: {Name:mk0521fddd5fd5b15f245469d92dd539e5ce995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.333186  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key ...
	I0214 21:51:27.333205  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key: {Name:mkb179c5d1b5603349d7002e5cbe42b54cae6bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.333437  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:51:27.333494  290030 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:51:27.333511  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:51:27.333540  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:51:27.333574  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:51:27.333607  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:51:27.333661  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:27.334378  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:51:27.365750  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:51:27.391666  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:51:27.427686  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:51:27.456842  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0214 21:51:27.488943  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 21:51:27.516957  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:51:27.542594  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 21:51:27.573435  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:51:27.601362  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:51:27.626583  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:51:27.653893  290030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:51:27.670757  290030 ssh_runner.go:195] Run: openssl version
	I0214 21:51:27.676763  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:51:27.688401  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.692797  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.692854  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.698895  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 21:51:27.709384  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:51:27.719630  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.724214  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.724271  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.729857  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:51:27.741344  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:51:27.752386  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.757249  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.757294  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.763492  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:51:27.774762  290030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:51:27.779055  290030 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 21:51:27.779111  290030 kubeadm.go:392] StartCluster: {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:51:27.779208  290030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:51:27.779257  290030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:51:27.837079  290030 cri.go:89] found id: ""
	I0214 21:51:27.837150  290030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:51:27.853732  290030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:51:27.868300  290030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:51:27.879448  290030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:51:27.879473  290030 kubeadm.go:157] found existing configuration files:
	
	I0214 21:51:27.879526  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:51:27.889031  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:51:27.889088  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:51:27.901374  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:51:27.912717  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:51:27.912778  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:51:27.924914  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:51:27.942213  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:51:27.942284  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:51:27.959684  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:51:27.969409  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:51:27.969462  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
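
Before running kubeadm init, the log above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the ones that miss it. A minimal sketch of that stale-config cleanup, assuming a plain substring check is sufficient (the real flow runs grep and rm -f over SSH):

// Sketch only: the "stale config cleanup" pattern visible above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func removeStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a file without the endpoint is treated as stale.
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue
		}
		fmt.Printf("%q not found in %s - removing\n", endpoint, f)
		_ = os.Remove(f) // ignore "not exist" errors, mirroring rm -f
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
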
	I0214 21:51:27.979150  290030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 21:51:28.113879  290030 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 21:51:28.114157  290030 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:51:28.275275  290030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:51:28.275415  290030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:51:28.275590  290030 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 21:51:28.459073  290030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:51:26.926527  290611 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:51:26.926550  290611 machine.go:96] duration metric: took 6.755550266s to provisionDockerMachine
	I0214 21:51:26.926563  290611 start.go:293] postStartSetup for "kubernetes-upgrade-041692" (driver="kvm2")
	I0214 21:51:26.926572  290611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:51:26.926591  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:26.926946  290611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:51:26.926979  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:26.929875  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:26.930204  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:26.930233  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:26.930426  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:26.930649  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:26.930853  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:26.931029  290611 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:51:27.012205  290611 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:51:27.018014  290611 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:51:27.018039  290611 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:51:27.018113  290611 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:51:27.018222  290611 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:51:27.018338  290611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:51:27.031184  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:27.057837  290611 start.go:296] duration metric: took 131.259984ms for postStartSetup
	I0214 21:51:27.057885  290611 fix.go:56] duration metric: took 6.910998775s for fixHost
	I0214 21:51:27.057912  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:27.060870  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.061250  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:27.061298  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.061475  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:27.061670  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:27.061832  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:27.061981  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:27.062161  290611 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:27.062369  290611 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0214 21:51:27.062382  290611 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:51:27.167646  290611 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569887.155843620
	
	I0214 21:51:27.167671  290611 fix.go:216] guest clock: 1739569887.155843620
	I0214 21:51:27.167680  290611 fix.go:229] Guest: 2025-02-14 21:51:27.15584362 +0000 UTC Remote: 2025-02-14 21:51:27.057891226 +0000 UTC m=+37.588582056 (delta=97.952394ms)
	I0214 21:51:27.167708  290611 fix.go:200] guest clock delta is within tolerance: 97.952394ms
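
The fix.go lines above read the VM's clock with `date +%s.%N`, compare it to the host's, and log the delta before deciding whether it is within tolerance. A small sketch of that comparison, assuming a fixed one-second tolerance; parseUnixNano is an illustrative helper, not minikube's own:

// Sketch only: guest-clock drift check, under an assumed 1s tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns the output of `date +%s.%N` into a time.Time
// (GNU date prints the fractional part as nine nanosecond digits).
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1739569887.155843620") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < time.Second)
}
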
	I0214 21:51:27.167715  290611 start.go:83] releasing machines lock for "kubernetes-upgrade-041692", held for 7.020853837s
	I0214 21:51:27.167743  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:27.168094  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:51:27.171468  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.171856  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:27.171884  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.172061  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:27.172650  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:27.172857  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .DriverName
	I0214 21:51:27.172962  290611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:51:27.173039  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:27.173057  290611 ssh_runner.go:195] Run: cat /version.json
	I0214 21:51:27.173080  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHHostname
	I0214 21:51:27.176382  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.176678  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.176900  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:27.176930  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.177054  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:27.177087  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:27.177295  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:27.177353  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHPort
	I0214 21:51:27.177490  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:27.177551  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHKeyPath
	I0214 21:51:27.177610  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:27.177670  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetSSHUsername
	I0214 21:51:27.177770  290611 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:51:27.177803  290611 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/kubernetes-upgrade-041692/id_rsa Username:docker}
	I0214 21:51:27.256394  290611 ssh_runner.go:195] Run: systemctl --version
	I0214 21:51:27.286432  290611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:51:27.447250  290611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:51:27.454069  290611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:51:27.454138  290611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:51:27.464223  290611 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
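
Above, minikube scans /etc/cni/net.d for bridge or podman CNI configs and would rename any match to <name>.mk_disabled; here none are found. A sketch of that disabling step, assuming a local rename is all that is needed (the real command runs find/mv over SSH):

// Sketch only: disable bridge/podman CNI configs by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNIConfigs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			fmt.Printf("disabling %s\n", src)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	_ = disableBridgeCNIConfigs("/etc/cni/net.d")
}
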
	I0214 21:51:27.464248  290611 start.go:495] detecting cgroup driver to use...
	I0214 21:51:27.464324  290611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:51:27.489392  290611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:51:27.512699  290611 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:51:27.512758  290611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:51:27.533439  290611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:51:27.557888  290611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:51:27.768151  290611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:51:27.948631  290611 docker.go:233] disabling docker service ...
	I0214 21:51:27.948705  290611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:51:28.032163  290611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:51:28.058777  290611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:51:28.245925  290611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:51:28.411985  290611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:51:28.439532  290611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:51:28.473725  290611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 21:51:28.473800  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.489382  290611 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:51:28.489451  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.504801  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.519723  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.534575  290611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:51:28.550410  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.565311  290611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:28.584019  290611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
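
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. A sketch of the same two substitutions done with Go regexps instead of sed; rewriteCrioConf is illustrative, and the path and values are taken from the log:

// Sketch only: in-place edits equivalent to the sed calls above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
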
	I0214 21:51:28.626778  290611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:51:28.646595  290611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:51:28.661002  290611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:28.853640  290611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:51:29.368961  290611 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:51:29.369046  290611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:51:29.374847  290611 start.go:563] Will wait 60s for crictl version
	I0214 21:51:29.374925  290611 ssh_runner.go:195] Run: which crictl
	I0214 21:51:29.379182  290611 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:51:29.415704  290611 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:51:29.415791  290611 ssh_runner.go:195] Run: crio --version
	I0214 21:51:29.447134  290611 ssh_runner.go:195] Run: crio --version
	I0214 21:51:29.478378  290611 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 21:51:29.479562  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) Calling .GetIP
	I0214 21:51:29.482434  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:29.482813  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:95:40", ip: ""} in network mk-kubernetes-upgrade-041692: {Iface:virbr2 ExpiryTime:2025-02-14 22:50:22 +0000 UTC Type:0 Mac:52:54:00:a1:95:40 Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:kubernetes-upgrade-041692 Clientid:01:52:54:00:a1:95:40}
	I0214 21:51:29.482843  290611 main.go:141] libmachine: (kubernetes-upgrade-041692) DBG | domain kubernetes-upgrade-041692 has defined IP address 192.168.50.64 and MAC address 52:54:00:a1:95:40 in network mk-kubernetes-upgrade-041692
	I0214 21:51:29.483067  290611 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0214 21:51:29.487449  290611 kubeadm.go:875] updating cluster {Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:51:29.487548  290611 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:51:29.487600  290611 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:28.571529  290030 out.go:235]   - Generating certificates and keys ...
	I0214 21:51:28.571668  290030 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:51:28.571801  290030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:51:28.571917  290030 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 21:51:28.781276  290030 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 21:51:28.889122  290030 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 21:51:29.037057  290030 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 21:51:29.163037  290030 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 21:51:29.163491  290030 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	I0214 21:51:29.328250  290030 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 21:51:29.328454  290030 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	I0214 21:51:29.525978  290030 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 21:51:29.691207  290030 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 21:51:29.820111  290030 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 21:51:29.820488  290030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:51:29.977726  290030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:51:30.129358  290030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:51:30.278856  290030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:51:30.408865  290030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:51:30.435199  290030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:51:30.436112  290030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:51:30.436182  290030 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:51:30.584138  290030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:51:27.223178  291072 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0214 21:51:27.223433  291072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:51:27.223477  291072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:51:27.239376  291072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0214 21:51:27.239792  291072 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:51:27.240417  291072 main.go:141] libmachine: Using API Version  1
	I0214 21:51:27.240443  291072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:51:27.240773  291072 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:51:27.240971  291072 main.go:141] libmachine: (no-preload-926549) Calling .GetMachineName
	I0214 21:51:27.241156  291072 main.go:141] libmachine: (no-preload-926549) Calling .DriverName
	I0214 21:51:27.241368  291072 start.go:159] libmachine.API.Create for "no-preload-926549" (driver="kvm2")
	I0214 21:51:27.241399  291072 client.go:168] LocalClient.Create starting
	I0214 21:51:27.241432  291072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 21:51:27.241468  291072 main.go:141] libmachine: Decoding PEM data...
	I0214 21:51:27.241489  291072 main.go:141] libmachine: Parsing certificate...
	I0214 21:51:27.241582  291072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 21:51:27.241612  291072 main.go:141] libmachine: Decoding PEM data...
	I0214 21:51:27.241628  291072 main.go:141] libmachine: Parsing certificate...
	I0214 21:51:27.241651  291072 main.go:141] libmachine: Running pre-create checks...
	I0214 21:51:27.241664  291072 main.go:141] libmachine: (no-preload-926549) Calling .PreCreateCheck
	I0214 21:51:27.241978  291072 main.go:141] libmachine: (no-preload-926549) Calling .GetConfigRaw
	I0214 21:51:27.242436  291072 main.go:141] libmachine: Creating machine...
	I0214 21:51:27.242451  291072 main.go:141] libmachine: (no-preload-926549) Calling .Create
	I0214 21:51:27.242579  291072 main.go:141] libmachine: (no-preload-926549) creating KVM machine...
	I0214 21:51:27.242597  291072 main.go:141] libmachine: (no-preload-926549) creating network...
	I0214 21:51:27.243757  291072 main.go:141] libmachine: (no-preload-926549) DBG | found existing default KVM network
	I0214 21:51:27.245805  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:27.245617  291168 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002fc050}
	I0214 21:51:27.245826  291072 main.go:141] libmachine: (no-preload-926549) DBG | created network xml: 
	I0214 21:51:27.245842  291072 main.go:141] libmachine: (no-preload-926549) DBG | <network>
	I0214 21:51:27.245856  291072 main.go:141] libmachine: (no-preload-926549) DBG |   <name>mk-no-preload-926549</name>
	I0214 21:51:27.245872  291072 main.go:141] libmachine: (no-preload-926549) DBG |   <dns enable='no'/>
	I0214 21:51:27.245884  291072 main.go:141] libmachine: (no-preload-926549) DBG |   
	I0214 21:51:27.245895  291072 main.go:141] libmachine: (no-preload-926549) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0214 21:51:27.245904  291072 main.go:141] libmachine: (no-preload-926549) DBG |     <dhcp>
	I0214 21:51:27.245913  291072 main.go:141] libmachine: (no-preload-926549) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0214 21:51:27.245934  291072 main.go:141] libmachine: (no-preload-926549) DBG |     </dhcp>
	I0214 21:51:27.245941  291072 main.go:141] libmachine: (no-preload-926549) DBG |   </ip>
	I0214 21:51:27.245952  291072 main.go:141] libmachine: (no-preload-926549) DBG |   
	I0214 21:51:27.245959  291072 main.go:141] libmachine: (no-preload-926549) DBG | </network>
	I0214 21:51:27.245968  291072 main.go:141] libmachine: (no-preload-926549) DBG | 
	I0214 21:51:27.387285  291072 main.go:141] libmachine: (no-preload-926549) DBG | trying to create private KVM network mk-no-preload-926549 192.168.39.0/24...
	I0214 21:51:27.470534  291072 main.go:141] libmachine: (no-preload-926549) DBG | private KVM network mk-no-preload-926549 192.168.39.0/24 created
	I0214 21:51:27.470574  291072 main.go:141] libmachine: (no-preload-926549) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549 ...
	I0214 21:51:27.470650  291072 main.go:141] libmachine: (no-preload-926549) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 21:51:27.470763  291072 main.go:141] libmachine: (no-preload-926549) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 21:51:27.470788  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:27.470564  291168 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:51:27.880807  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:27.880683  291168 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549/id_rsa...
	I0214 21:51:28.046864  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:28.046743  291168 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549/no-preload-926549.rawdisk...
	I0214 21:51:28.046999  291072 main.go:141] libmachine: (no-preload-926549) DBG | Writing magic tar header
	I0214 21:51:28.047032  291072 main.go:141] libmachine: (no-preload-926549) DBG | Writing SSH key tar header
	I0214 21:51:28.047122  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:28.047058  291168 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549 ...
	I0214 21:51:28.047314  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549
	I0214 21:51:28.047358  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 21:51:28.047376  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549 (perms=drwx------)
	I0214 21:51:28.047410  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:51:28.047452  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 21:51:28.047464  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 21:51:28.047480  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 21:51:28.047488  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home/jenkins
	I0214 21:51:28.047499  291072 main.go:141] libmachine: (no-preload-926549) DBG | checking permissions on dir: /home
	I0214 21:51:28.047507  291072 main.go:141] libmachine: (no-preload-926549) DBG | skipping /home - not owner
	I0214 21:51:28.047527  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 21:51:28.047543  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 21:51:28.047553  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 21:51:28.047562  291072 main.go:141] libmachine: (no-preload-926549) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 21:51:28.047573  291072 main.go:141] libmachine: (no-preload-926549) creating domain...
	I0214 21:51:28.048911  291072 main.go:141] libmachine: (no-preload-926549) define libvirt domain using xml: 
	I0214 21:51:28.048930  291072 main.go:141] libmachine: (no-preload-926549) <domain type='kvm'>
	I0214 21:51:28.048952  291072 main.go:141] libmachine: (no-preload-926549)   <name>no-preload-926549</name>
	I0214 21:51:28.048960  291072 main.go:141] libmachine: (no-preload-926549)   <memory unit='MiB'>2200</memory>
	I0214 21:51:28.048972  291072 main.go:141] libmachine: (no-preload-926549)   <vcpu>2</vcpu>
	I0214 21:51:28.048979  291072 main.go:141] libmachine: (no-preload-926549)   <features>
	I0214 21:51:28.048989  291072 main.go:141] libmachine: (no-preload-926549)     <acpi/>
	I0214 21:51:28.048998  291072 main.go:141] libmachine: (no-preload-926549)     <apic/>
	I0214 21:51:28.049007  291072 main.go:141] libmachine: (no-preload-926549)     <pae/>
	I0214 21:51:28.049017  291072 main.go:141] libmachine: (no-preload-926549)     
	I0214 21:51:28.049025  291072 main.go:141] libmachine: (no-preload-926549)   </features>
	I0214 21:51:28.049032  291072 main.go:141] libmachine: (no-preload-926549)   <cpu mode='host-passthrough'>
	I0214 21:51:28.049041  291072 main.go:141] libmachine: (no-preload-926549)   
	I0214 21:51:28.049048  291072 main.go:141] libmachine: (no-preload-926549)   </cpu>
	I0214 21:51:28.049059  291072 main.go:141] libmachine: (no-preload-926549)   <os>
	I0214 21:51:28.049069  291072 main.go:141] libmachine: (no-preload-926549)     <type>hvm</type>
	I0214 21:51:28.049077  291072 main.go:141] libmachine: (no-preload-926549)     <boot dev='cdrom'/>
	I0214 21:51:28.049091  291072 main.go:141] libmachine: (no-preload-926549)     <boot dev='hd'/>
	I0214 21:51:28.049102  291072 main.go:141] libmachine: (no-preload-926549)     <bootmenu enable='no'/>
	I0214 21:51:28.049110  291072 main.go:141] libmachine: (no-preload-926549)   </os>
	I0214 21:51:28.049115  291072 main.go:141] libmachine: (no-preload-926549)   <devices>
	I0214 21:51:28.049120  291072 main.go:141] libmachine: (no-preload-926549)     <disk type='file' device='cdrom'>
	I0214 21:51:28.049130  291072 main.go:141] libmachine: (no-preload-926549)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549/boot2docker.iso'/>
	I0214 21:51:28.049136  291072 main.go:141] libmachine: (no-preload-926549)       <target dev='hdc' bus='scsi'/>
	I0214 21:51:28.049142  291072 main.go:141] libmachine: (no-preload-926549)       <readonly/>
	I0214 21:51:28.049147  291072 main.go:141] libmachine: (no-preload-926549)     </disk>
	I0214 21:51:28.049154  291072 main.go:141] libmachine: (no-preload-926549)     <disk type='file' device='disk'>
	I0214 21:51:28.049160  291072 main.go:141] libmachine: (no-preload-926549)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 21:51:28.049171  291072 main.go:141] libmachine: (no-preload-926549)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549/no-preload-926549.rawdisk'/>
	I0214 21:51:28.049178  291072 main.go:141] libmachine: (no-preload-926549)       <target dev='hda' bus='virtio'/>
	I0214 21:51:28.049189  291072 main.go:141] libmachine: (no-preload-926549)     </disk>
	I0214 21:51:28.049196  291072 main.go:141] libmachine: (no-preload-926549)     <interface type='network'>
	I0214 21:51:28.049207  291072 main.go:141] libmachine: (no-preload-926549)       <source network='mk-no-preload-926549'/>
	I0214 21:51:28.049215  291072 main.go:141] libmachine: (no-preload-926549)       <model type='virtio'/>
	I0214 21:51:28.049223  291072 main.go:141] libmachine: (no-preload-926549)     </interface>
	I0214 21:51:28.049242  291072 main.go:141] libmachine: (no-preload-926549)     <interface type='network'>
	I0214 21:51:28.049255  291072 main.go:141] libmachine: (no-preload-926549)       <source network='default'/>
	I0214 21:51:28.049265  291072 main.go:141] libmachine: (no-preload-926549)       <model type='virtio'/>
	I0214 21:51:28.049273  291072 main.go:141] libmachine: (no-preload-926549)     </interface>
	I0214 21:51:28.049283  291072 main.go:141] libmachine: (no-preload-926549)     <serial type='pty'>
	I0214 21:51:28.049291  291072 main.go:141] libmachine: (no-preload-926549)       <target port='0'/>
	I0214 21:51:28.049299  291072 main.go:141] libmachine: (no-preload-926549)     </serial>
	I0214 21:51:28.049304  291072 main.go:141] libmachine: (no-preload-926549)     <console type='pty'>
	I0214 21:51:28.049311  291072 main.go:141] libmachine: (no-preload-926549)       <target type='serial' port='0'/>
	I0214 21:51:28.049323  291072 main.go:141] libmachine: (no-preload-926549)     </console>
	I0214 21:51:28.049330  291072 main.go:141] libmachine: (no-preload-926549)     <rng model='virtio'>
	I0214 21:51:28.049339  291072 main.go:141] libmachine: (no-preload-926549)       <backend model='random'>/dev/random</backend>
	I0214 21:51:28.049346  291072 main.go:141] libmachine: (no-preload-926549)     </rng>
	I0214 21:51:28.049353  291072 main.go:141] libmachine: (no-preload-926549)     
	I0214 21:51:28.049359  291072 main.go:141] libmachine: (no-preload-926549)     
	I0214 21:51:28.049367  291072 main.go:141] libmachine: (no-preload-926549)   </devices>
	I0214 21:51:28.049375  291072 main.go:141] libmachine: (no-preload-926549) </domain>
	I0214 21:51:28.049385  291072 main.go:141] libmachine: (no-preload-926549) 
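
The domain XML above is what the kvm2 driver hands to libvirt to define the new VM. A sketch of how such a definition could be rendered from a template; the trimmed-down XML and the field names here are illustrative, not the driver's actual template, while the concrete values are taken from the log:

// Sketch only: render a libvirt domain definition from a template.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	vm := struct {
		Name      string
		MemoryMiB int
		CPUs      int
		DiskPath  string
		Network   string
	}{
		Name:      "no-preload-926549",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/home/jenkins/minikube-integration/20315-243456/.minikube/machines/no-preload-926549/no-preload-926549.rawdisk",
		Network:   "mk-no-preload-926549",
	}
	// Render the XML to stdout; the real driver passes the full definition to libvirt.
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, vm)
}
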
	I0214 21:51:28.108462  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:61:02:ae in network default
	I0214 21:51:28.109311  291072 main.go:141] libmachine: (no-preload-926549) starting domain...
	I0214 21:51:28.109343  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:28.109356  291072 main.go:141] libmachine: (no-preload-926549) ensuring networks are active...
	I0214 21:51:28.110326  291072 main.go:141] libmachine: (no-preload-926549) Ensuring network default is active
	I0214 21:51:28.110783  291072 main.go:141] libmachine: (no-preload-926549) Ensuring network mk-no-preload-926549 is active
	I0214 21:51:28.111559  291072 main.go:141] libmachine: (no-preload-926549) getting domain XML...
	I0214 21:51:28.112465  291072 main.go:141] libmachine: (no-preload-926549) creating domain...
	I0214 21:51:28.887726  291072 main.go:141] libmachine: (no-preload-926549) waiting for IP...
	I0214 21:51:28.888767  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:28.889470  291072 main.go:141] libmachine: (no-preload-926549) DBG | unable to find current IP address of domain no-preload-926549 in network mk-no-preload-926549
	I0214 21:51:28.889631  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:28.889558  291168 retry.go:31] will retry after 267.964428ms: waiting for domain to come up
	I0214 21:51:29.159383  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:29.159897  291072 main.go:141] libmachine: (no-preload-926549) DBG | unable to find current IP address of domain no-preload-926549 in network mk-no-preload-926549
	I0214 21:51:29.159933  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:29.159866  291168 retry.go:31] will retry after 369.653041ms: waiting for domain to come up
	I0214 21:51:29.531598  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:29.532130  291072 main.go:141] libmachine: (no-preload-926549) DBG | unable to find current IP address of domain no-preload-926549 in network mk-no-preload-926549
	I0214 21:51:29.532218  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:29.532134  291168 retry.go:31] will retry after 429.987438ms: waiting for domain to come up
	I0214 21:51:29.963897  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:29.964477  291072 main.go:141] libmachine: (no-preload-926549) DBG | unable to find current IP address of domain no-preload-926549 in network mk-no-preload-926549
	I0214 21:51:29.964500  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:29.964435  291168 retry.go:31] will retry after 520.398062ms: waiting for domain to come up
	I0214 21:51:30.486116  291072 main.go:141] libmachine: (no-preload-926549) DBG | domain no-preload-926549 has defined MAC address 52:54:00:4e:c0:63 in network mk-no-preload-926549
	I0214 21:51:30.486856  291072 main.go:141] libmachine: (no-preload-926549) DBG | unable to find current IP address of domain no-preload-926549 in network mk-no-preload-926549
	I0214 21:51:30.486889  291072 main.go:141] libmachine: (no-preload-926549) DBG | I0214 21:51:30.486841  291168 retry.go:31] will retry after 530.517629ms: waiting for domain to come up
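
The retry.go lines above poll the libvirt network until the new domain picks up a DHCP lease, backing off between attempts. A sketch of that wait loop, with a hypothetical lookupIP standing in for the real lease query and a simple capped, growing delay:

// Sketch only: "waiting for domain to come up" retry loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the wait between probes
		}
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}
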
	I0214 21:51:29.536266  290611 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:51:29.536288  290611 crio.go:433] Images already preloaded, skipping extraction
	I0214 21:51:29.536349  290611 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:29.571885  290611 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:51:29.571910  290611 cache_images.go:84] Images are preloaded, skipping loading
	I0214 21:51:29.571919  290611 kubeadm.go:926] updating node { 192.168.50.64 8443 v1.32.1 crio true true} ...
	I0214 21:51:29.572035  290611 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-041692 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.64
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:51:29.572110  290611 ssh_runner.go:195] Run: crio config
	I0214 21:51:29.631860  290611 cni.go:84] Creating CNI manager for ""
	I0214 21:51:29.631891  290611 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:51:29.631924  290611 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:51:29.631959  290611 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.64 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-041692 NodeName:kubernetes-upgrade-041692 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 21:51:29.632170  290611 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-041692"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.64"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.64"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:51:29.632253  290611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 21:51:29.643263  290611 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:51:29.643347  290611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:51:29.653736  290611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0214 21:51:29.673328  290611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:51:29.693686  290611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
	I0214 21:51:29.715912  290611 ssh_runner.go:195] Run: grep 192.168.50.64	control-plane.minikube.internal$ /etc/hosts
	I0214 21:51:29.723562  290611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:29.876429  290611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:51:29.894182  290611 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692 for IP: 192.168.50.64
	I0214 21:51:29.894202  290611 certs.go:194] generating shared ca certs ...
	I0214 21:51:29.894223  290611 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:29.894414  290611 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:51:29.894468  290611 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:51:29.894479  290611 certs.go:256] generating profile certs ...
	I0214 21:51:29.894591  290611 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/client.key
	I0214 21:51:29.894672  290611 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key.6fdfc3ce
	I0214 21:51:29.894723  290611 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key
	I0214 21:51:29.894886  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:51:29.894931  290611 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:51:29.894943  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:51:29.894979  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:51:29.895013  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:51:29.895041  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:51:29.895091  290611 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:29.895894  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:51:29.928059  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:51:29.960067  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:51:29.989680  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:51:30.065172  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0214 21:51:30.095522  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 21:51:30.119841  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:51:30.155097  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kubernetes-upgrade-041692/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 21:51:30.188254  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:51:30.214500  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:51:30.241900  290611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:51:30.268129  290611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:51:30.286644  290611 ssh_runner.go:195] Run: openssl version
	I0214 21:51:30.292874  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:51:30.305079  290611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:51:30.310118  290611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:51:30.310174  290611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:51:30.318549  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:51:30.328819  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:51:30.342852  290611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:30.348649  290611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:30.348716  290611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:30.356106  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:51:30.368174  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:51:30.382311  290611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:51:30.387792  290611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:51:30.387839  290611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:51:30.394087  290611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 21:51:30.406744  290611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:51:30.412778  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 21:51:30.419221  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 21:51:30.425440  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 21:51:30.432954  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 21:51:30.439609  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 21:51:30.445711  290611 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0214 21:51:30.451671  290611 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-041692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kuberne
tes-upgrade-041692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:51:30.451784  290611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:51:30.451829  290611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:51:30.498732  290611 cri.go:89] found id: "66a2eaee4bd7addddd573ba8908eaa521107aecbf8dc0b0eef96f0f9ff21f8ef"
	I0214 21:51:30.498762  290611 cri.go:89] found id: "430dd750d640ce8bf6118565d2d379585b027040bdf287ed4b407f2a4f6b2099"
	I0214 21:51:30.498769  290611 cri.go:89] found id: "37c714ae1e8ff919808ff5f873e4bfbe59b5cc1d863759eee787aaed566df88d"
	I0214 21:51:30.498775  290611 cri.go:89] found id: "56461f1dcd1247929b8796d488284f214d46b6ae1e86357b4557fdf57974e148"
	I0214 21:51:30.498779  290611 cri.go:89] found id: "f81030b1a3496682011776f95b553f2acc29813c0eca2418bdbf2cc1868066f3"
	I0214 21:51:30.498786  290611 cri.go:89] found id: "e5ba60a5c3a703031eacd1fe665571bc44102b94d29b6aa422e81bbd8bb34800"
	I0214 21:51:30.498790  290611 cri.go:89] found id: "f1a63457f6e438f49d432b05d1f9eac03919614a3ab1cb4fe7bd6c4f57d54fe0"
	I0214 21:51:30.498794  290611 cri.go:89] found id: "aad785097304d68e3613553f45d2422ea392473b9065ca822924af2027c1672a"
	I0214 21:51:30.498798  290611 cri.go:89] found id: ""
	I0214 21:51:30.498857  290611 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-041692 -n kubernetes-upgrade-041692
E0214 21:51:42.290595  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-041692 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-041692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-041692
--- FAIL: TestKubernetesUpgrade (433.75s)
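Note: the certificate validation sequence recorded in the log above can be reproduced by hand. The run links each CA certificate into /etc/ssl/certs under its OpenSSL subject hash and then checks every control-plane certificate with `openssl x509 -noout -checkend 86400` (non-zero exit means the certificate expires within the next 24 hours). A minimal sketch of those two steps, assuming shell access to the node (for example via `minikube ssh`); the paths are copied from the log above, while the echo messages and control flow are illustrative only:

	# Expose the minikube CA to OpenSSL under its subject-hash name,
	# mirroring the ln -fs commands recorded in the log above.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

	# Verify the API server certificate is valid for at least another 24 hours
	# (86400 seconds), as done for each control-plane certificate before the upgrade.
	# Reading the key material may require root, depending on file permissions.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
		echo "apiserver.crt valid for at least 24h"
	else
		echo "apiserver.crt expires within 24h (or could not be read)"
	fi
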

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (45.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-865564 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-865564 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.202223318s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-865564] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-865564" primary control-plane node in "pause-865564" cluster
	* Updating the running kvm2 "pause-865564" VM ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-865564" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:47:38.475877  285551 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:47:38.475965  285551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:47:38.475973  285551 out.go:358] Setting ErrFile to fd 2...
	I0214 21:47:38.475977  285551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:47:38.476146  285551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:47:38.476616  285551 out.go:352] Setting JSON to false
	I0214 21:47:38.477516  285551 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9002,"bootTime":1739560656,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:47:38.477605  285551 start.go:140] virtualization: kvm guest
	I0214 21:47:38.480502  285551 out.go:177] * [pause-865564] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:47:38.481762  285551 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:47:38.481764  285551 notify.go:220] Checking for updates...
	I0214 21:47:38.484150  285551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:47:38.485428  285551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:47:38.487088  285551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:47:38.488218  285551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:47:38.489333  285551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:47:38.491010  285551 config.go:182] Loaded profile config "pause-865564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:47:38.491549  285551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:47:38.491611  285551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:47:38.506815  285551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0214 21:47:38.507175  285551 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:47:38.507659  285551 main.go:141] libmachine: Using API Version  1
	I0214 21:47:38.507682  285551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:47:38.508006  285551 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:47:38.508179  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:38.508408  285551 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:47:38.508688  285551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:47:38.508720  285551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:47:38.522528  285551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0214 21:47:38.522948  285551 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:47:38.523421  285551 main.go:141] libmachine: Using API Version  1
	I0214 21:47:38.523444  285551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:47:38.523749  285551 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:47:38.523934  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:38.557422  285551 out.go:177] * Using the kvm2 driver based on existing profile
	I0214 21:47:38.558532  285551 start.go:304] selected driver: kvm2
	I0214 21:47:38.558555  285551 start.go:908] validating driver "kvm2" against &{Name:pause-865564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pa
use-865564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:47:38.558726  285551 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:47:38.559069  285551 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:47:38.559153  285551 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:47:38.574132  285551 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:47:38.574934  285551 cni.go:84] Creating CNI manager for ""
	I0214 21:47:38.575007  285551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:47:38.575086  285551 start.go:347] cluster config:
	{Name:pause-865564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-865564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alias
es:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:47:38.575235  285551 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:47:38.576649  285551 out.go:177] * Starting "pause-865564" primary control-plane node in "pause-865564" cluster
	I0214 21:47:38.577913  285551 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:47:38.577947  285551 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 21:47:38.577957  285551 cache.go:56] Caching tarball of preloaded images
	I0214 21:47:38.578030  285551 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 21:47:38.578042  285551 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 21:47:38.578146  285551 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/config.json ...
	I0214 21:47:38.578321  285551 start.go:360] acquireMachinesLock for pause-865564: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:47:48.594881  285551 start.go:364] duration metric: took 10.016517744s to acquireMachinesLock for "pause-865564"
	I0214 21:47:48.594922  285551 start.go:96] Skipping create...Using existing machine configuration
	I0214 21:47:48.594928  285551 fix.go:54] fixHost starting: 
	I0214 21:47:48.595344  285551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:47:48.595398  285551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:47:48.612818  285551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0214 21:47:48.613303  285551 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:47:48.613821  285551 main.go:141] libmachine: Using API Version  1
	I0214 21:47:48.613844  285551 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:47:48.614248  285551 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:47:48.614438  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:48.614605  285551 main.go:141] libmachine: (pause-865564) Calling .GetState
	I0214 21:47:48.616124  285551 fix.go:112] recreateIfNeeded on pause-865564: state=Running err=<nil>
	W0214 21:47:48.616143  285551 fix.go:138] unexpected machine state, will restart: <nil>
	I0214 21:47:48.618021  285551 out.go:177] * Updating the running kvm2 "pause-865564" VM ...
	I0214 21:47:48.619317  285551 machine.go:93] provisionDockerMachine start ...
	I0214 21:47:48.619342  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:48.619518  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:48.622134  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.622536  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:48.622563  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.622729  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:48.622899  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.623051  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.623206  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:48.623427  285551 main.go:141] libmachine: Using SSH client type: native
	I0214 21:47:48.623707  285551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I0214 21:47:48.623729  285551 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:47:48.743006  285551 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-865564
	
	I0214 21:47:48.743041  285551 main.go:141] libmachine: (pause-865564) Calling .GetMachineName
	I0214 21:47:48.743305  285551 buildroot.go:166] provisioning hostname "pause-865564"
	I0214 21:47:48.743333  285551 main.go:141] libmachine: (pause-865564) Calling .GetMachineName
	I0214 21:47:48.743524  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:48.746472  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.746915  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:48.746940  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.747035  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:48.747205  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.747348  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.747471  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:48.747633  285551 main.go:141] libmachine: Using SSH client type: native
	I0214 21:47:48.747841  285551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I0214 21:47:48.747857  285551 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-865564 && echo "pause-865564" | sudo tee /etc/hostname
	I0214 21:47:48.871116  285551 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-865564
	
	I0214 21:47:48.871150  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:48.873989  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.874457  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:48.874490  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.874710  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:48.874924  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.875091  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:48.875230  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:48.875388  285551 main.go:141] libmachine: Using SSH client type: native
	I0214 21:47:48.875542  285551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I0214 21:47:48.875558  285551 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-865564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-865564/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-865564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:47:48.991420  285551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:47:48.991446  285551 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:47:48.991462  285551 buildroot.go:174] setting up certificates
	I0214 21:47:48.991469  285551 provision.go:84] configureAuth start
	I0214 21:47:48.991479  285551 main.go:141] libmachine: (pause-865564) Calling .GetMachineName
	I0214 21:47:48.991744  285551 main.go:141] libmachine: (pause-865564) Calling .GetIP
	I0214 21:47:48.994594  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.995007  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:48.995037  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.995187  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:48.997411  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.997705  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:48.997728  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:48.997900  285551 provision.go:143] copyHostCerts
	I0214 21:47:48.997961  285551 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:47:48.997974  285551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:47:48.998025  285551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:47:48.998128  285551 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:47:48.998136  285551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:47:48.998168  285551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:47:48.998235  285551 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:47:48.998242  285551 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:47:48.998259  285551 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:47:48.998307  285551 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.pause-865564 san=[127.0.0.1 192.168.72.173 localhost minikube pause-865564]
	I0214 21:47:49.112823  285551 provision.go:177] copyRemoteCerts
	I0214 21:47:49.112882  285551 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:47:49.112911  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:49.115542  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:49.115923  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:49.115957  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:49.116103  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:49.116304  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:49.116481  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:49.116646  285551 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/pause-865564/id_rsa Username:docker}
	I0214 21:47:49.205607  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0214 21:47:49.233422  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:47:49.260909  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:47:49.284903  285551 provision.go:87] duration metric: took 293.42445ms to configureAuth
	I0214 21:47:49.284922  285551 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:47:49.285139  285551 config.go:182] Loaded profile config "pause-865564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:47:49.285218  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:49.288050  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:49.288489  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:49.288518  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:49.288679  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:49.288876  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:49.289053  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:49.289232  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:49.289454  285551 main.go:141] libmachine: Using SSH client type: native
	I0214 21:47:49.289721  285551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I0214 21:47:49.289745  285551 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:47:54.839768  285551 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:47:54.839811  285551 machine.go:96] duration metric: took 6.22047069s to provisionDockerMachine
	I0214 21:47:54.839834  285551 start.go:293] postStartSetup for "pause-865564" (driver="kvm2")
	I0214 21:47:54.839849  285551 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:47:54.839881  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:54.840358  285551 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:47:54.840390  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:54.843209  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:54.843663  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:54.843699  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:54.843851  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:54.844053  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:54.844232  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:54.844387  285551 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/pause-865564/id_rsa Username:docker}
	I0214 21:47:54.938654  285551 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:47:54.942908  285551 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:47:54.942936  285551 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:47:54.942992  285551 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:47:54.943090  285551 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:47:54.943201  285551 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:47:54.952655  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:47:54.976554  285551 start.go:296] duration metric: took 136.706122ms for postStartSetup
	I0214 21:47:54.976590  285551 fix.go:56] duration metric: took 6.381661819s for fixHost
	I0214 21:47:54.976618  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:54.979609  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:54.980009  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:54.980043  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:54.980190  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:54.980408  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:54.980575  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:54.980738  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:54.980934  285551 main.go:141] libmachine: Using SSH client type: native
	I0214 21:47:54.981109  285551 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.173 22 <nil> <nil>}
	I0214 21:47:54.981126  285551 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:47:55.101782  285551 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569675.058893695
	
	I0214 21:47:55.101807  285551 fix.go:216] guest clock: 1739569675.058893695
	I0214 21:47:55.101818  285551 fix.go:229] Guest: 2025-02-14 21:47:55.058893695 +0000 UTC Remote: 2025-02-14 21:47:54.976594869 +0000 UTC m=+16.538118414 (delta=82.298826ms)
	I0214 21:47:55.101874  285551 fix.go:200] guest clock delta is within tolerance: 82.298826ms
	I0214 21:47:55.101888  285551 start.go:83] releasing machines lock for "pause-865564", held for 6.506978715s
	I0214 21:47:55.101916  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:55.102134  285551 main.go:141] libmachine: (pause-865564) Calling .GetIP
	I0214 21:47:55.104946  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.105345  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:55.105388  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.105549  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:55.106086  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:55.106255  285551 main.go:141] libmachine: (pause-865564) Calling .DriverName
	I0214 21:47:55.106359  285551 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:47:55.106405  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:55.106528  285551 ssh_runner.go:195] Run: cat /version.json
	I0214 21:47:55.106555  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHHostname
	I0214 21:47:55.109606  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.109844  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.109968  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:55.109996  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.110166  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:55.110256  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:55.110281  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:55.110341  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:55.110400  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHPort
	I0214 21:47:55.110534  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:55.110539  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHKeyPath
	I0214 21:47:55.110684  285551 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/pause-865564/id_rsa Username:docker}
	I0214 21:47:55.110729  285551 main.go:141] libmachine: (pause-865564) Calling .GetSSHUsername
	I0214 21:47:55.110868  285551 sshutil.go:53] new ssh client: &{IP:192.168.72.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/pause-865564/id_rsa Username:docker}
	I0214 21:47:55.212938  285551 ssh_runner.go:195] Run: systemctl --version
	I0214 21:47:55.219000  285551 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:47:55.376420  285551 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:47:55.390695  285551 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:47:55.390789  285551 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:47:55.409961  285551 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0214 21:47:55.409984  285551 start.go:495] detecting cgroup driver to use...
	I0214 21:47:55.410046  285551 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:47:55.429027  285551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:47:55.450987  285551 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:47:55.451036  285551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:47:55.477558  285551 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:47:55.496635  285551 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:47:55.631667  285551 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:47:55.767467  285551 docker.go:233] disabling docker service ...
	I0214 21:47:55.767573  285551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:47:55.783551  285551 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:47:55.799153  285551 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:47:55.963753  285551 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:47:56.175531  285551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:47:56.202857  285551 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:47:56.275229  285551 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 21:47:56.275308  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.295861  285551 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:47:56.295946  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.318802  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.348638  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.371234  285551 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:47:56.391016  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.418721  285551 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.440327  285551 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:47:56.457222  285551 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:47:56.466972  285551 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:47:56.480765  285551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:47:56.712287  285551 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:47:57.550454  285551 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:47:57.550603  285551 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:47:57.557822  285551 start.go:563] Will wait 60s for crictl version
	I0214 21:47:57.557885  285551 ssh_runner.go:195] Run: which crictl
	I0214 21:47:57.562061  285551 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:47:57.599155  285551 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:47:57.599238  285551 ssh_runner.go:195] Run: crio --version
	I0214 21:47:57.639342  285551 ssh_runner.go:195] Run: crio --version
	I0214 21:47:57.671749  285551 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 21:47:57.672939  285551 main.go:141] libmachine: (pause-865564) Calling .GetIP
	I0214 21:47:57.676063  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:57.676572  285551 main.go:141] libmachine: (pause-865564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:8f:50", ip: ""} in network mk-pause-865564: {Iface:virbr4 ExpiryTime:2025-02-14 22:46:26 +0000 UTC Type:0 Mac:52:54:00:7c:8f:50 Iaid: IPaddr:192.168.72.173 Prefix:24 Hostname:pause-865564 Clientid:01:52:54:00:7c:8f:50}
	I0214 21:47:57.676607  285551 main.go:141] libmachine: (pause-865564) DBG | domain pause-865564 has defined IP address 192.168.72.173 and MAC address 52:54:00:7c:8f:50 in network mk-pause-865564
	I0214 21:47:57.676769  285551 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0214 21:47:57.682025  285551 kubeadm.go:875] updating cluster {Name:pause-865564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-865564 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-secu
rity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:47:57.682212  285551 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 21:47:57.682268  285551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:47:57.735515  285551 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:47:57.735542  285551 crio.go:433] Images already preloaded, skipping extraction
	I0214 21:47:57.735604  285551 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:47:57.779758  285551 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 21:47:57.779787  285551 cache_images.go:84] Images are preloaded, skipping loading
	I0214 21:47:57.779798  285551 kubeadm.go:926] updating node { 192.168.72.173 8443 v1.32.1 crio true true} ...
	I0214 21:47:57.779925  285551 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-865564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:pause-865564 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:47:57.779992  285551 ssh_runner.go:195] Run: crio config
	I0214 21:47:57.826834  285551 cni.go:84] Creating CNI manager for ""
	I0214 21:47:57.826866  285551 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:47:57.826881  285551 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:47:57.826917  285551 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.173 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-865564 NodeName:pause-865564 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 21:47:57.827123  285551 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-865564"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.173"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.173"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
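The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). As a small illustrative sketch, not part of minikube, the documents can be split and identified with gopkg.in/yaml.v3; the path below is the kubeadm.yaml.new destination seen later in this log:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	// Illustrative path taken from the scp step in this log.
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		// Prints e.g. "kubeadm.k8s.io/v1beta4 InitConfiguration", one line per document.
    		fmt.Println(doc.APIVersion, doc.Kind)
    	}
    }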
	
	I0214 21:47:57.827212  285551 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 21:47:57.837998  285551 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:47:57.838067  285551 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:47:57.847998  285551 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0214 21:47:57.866999  285551 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:47:57.885307  285551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0214 21:47:57.906453  285551 ssh_runner.go:195] Run: grep 192.168.72.173	control-plane.minikube.internal$ /etc/hosts
	I0214 21:47:57.911205  285551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:47:58.076428  285551 ssh_runner.go:195] Run: sudo systemctl start kubelet
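The steps above stage the kubelet drop-in and unit file, reload systemd, and start the service over minikube's ssh_runner. A rough local-shell sketch of the same systemctl sequence follows (run as root on the guest; the paths are the ones from the log, and this is not minikube's actual code path):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same sequence the log shows, run directly instead of via ssh_runner:
    	// create the unit directories, reload unit files, then start kubelet.
    	steps := [][]string{
    		{"sudo", "mkdir", "-p", "/etc/systemd/system/kubelet.service.d", "/lib/systemd/system"},
    		{"sudo", "systemctl", "daemon-reload"},
    		{"sudo", "systemctl", "start", "kubelet"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			fmt.Printf("%v failed: %v\n%s", s, err, out)
    			return
    		}
    	}
    	fmt.Println("kubelet started")
    }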
	I0214 21:47:58.093343  285551 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564 for IP: 192.168.72.173
	I0214 21:47:58.093366  285551 certs.go:194] generating shared ca certs ...
	I0214 21:47:58.093388  285551 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:47:58.093574  285551 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:47:58.093649  285551 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:47:58.093664  285551 certs.go:256] generating profile certs ...
	I0214 21:47:58.093780  285551 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.key
	I0214 21:47:58.093872  285551 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/apiserver.key.b09fd9fb
	I0214 21:47:58.093932  285551 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/proxy-client.key
	I0214 21:47:58.094097  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:47:58.094142  285551 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:47:58.094154  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:47:58.094193  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:47:58.094229  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:47:58.094260  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:47:58.094320  285551 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:47:58.095279  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:47:58.125072  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:47:58.157556  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:47:58.187116  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:47:58.210682  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 21:47:58.239070  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 21:47:58.266757  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:47:58.296100  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 21:47:58.324640  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:47:58.351360  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:47:58.377681  285551 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:47:58.401284  285551 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:47:58.419796  285551 ssh_runner.go:195] Run: openssl version
	I0214 21:47:58.431005  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:47:58.448124  285551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:47:58.466498  285551 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:47:58.466548  285551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:47:58.494131  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:47:58.521104  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:47:58.541999  285551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:47:58.554979  285551 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:47:58.555037  285551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:47:58.569395  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:47:58.628119  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:47:58.652991  285551 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:47:58.687273  285551 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:47:58.687347  285551 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:47:58.712750  285551 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
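Each openssl/ln pair above computes a certificate's subject hash and links it into /etc/ssl/certs as <hash>.0 so the system trust store resolves it. A minimal sketch of one such pair (assumes root and an openssl binary on PATH; the certificate path is illustrative and this is not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative
    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Mirror `ln -fs`: drop any existing link, then point it at the certificate.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }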
	I0214 21:47:58.768868  285551 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:47:58.788732  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 21:47:58.831427  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 21:47:58.869607  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 21:47:58.889040  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 21:47:58.907764  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 21:47:58.982590  285551 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
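Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours. An equivalent in-process check with Go's crypto/x509 looks roughly like this (a sketch, not what minikube runs; the cert path is one of the files checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// 24h matches the -checkend 86400 argument used in the log.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }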
	I0214 21:47:59.007112  285551 kubeadm.go:392] StartCluster: {Name:pause-865564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-865564 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securit
y-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:47:59.007271  285551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:47:59.007372  285551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:47:59.090719  285551 cri.go:89] found id: "62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764"
	I0214 21:47:59.090753  285551 cri.go:89] found id: "06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516"
	I0214 21:47:59.090760  285551 cri.go:89] found id: "b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5"
	I0214 21:47:59.090766  285551 cri.go:89] found id: "2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f"
	I0214 21:47:59.090771  285551 cri.go:89] found id: "34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464"
	I0214 21:47:59.090775  285551 cri.go:89] found id: "75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e"
	I0214 21:47:59.090780  285551 cri.go:89] found id: "a31b81b2f481e957eadc1c2a375d5d6bc26b3f2be474c088001d628617b6ce08"
	I0214 21:47:59.090784  285551 cri.go:89] found id: ""
	I0214 21:47:59.090842  285551 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
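Just before the log above ends, minikube lists the kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` and records each ID it finds. A minimal stand-alone sketch of that listing (assumes crictl on the node's PATH; not minikube's cri package):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter the log uses: all containers, IDs only, kube-system namespace label.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }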
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-865564 -n pause-865564
I0214 21:48:19.689613  250783 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0214 21:48:19.689711  250783 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0214 21:48:19.730866  250783 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0214 21:48:19.730906  250783 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0214 21:48:19.730995  250783 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0214 21:48:19.731028  250783 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1425863628/002/docker-machine-driver-kvm2
I0214 21:48:19.755539  250783 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1425863628/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840] Decompressors:map[bz2:0xc0006043f8 gz:0xc000604480 tar:0xc000604430 tar.bz2:0xc000604440 tar.gz:0xc000604450 tar.xz:0xc000604460 tar.zst:0xc000604470 tbz2:0xc000604440 tgz:0xc000604450 txz:0xc000604460 tzst:0xc000604470 xz:0xc000604488 zip:0xc0006044a0 zst:0xc0006044b0] Getters:map[file:0xc001b9e1d0 http:0xc0007a82d0 https:0xc0007a8320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response co
de: 404. trying to get the common version
I0214 21:48:19.755592  250783 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1425863628/002/docker-machine-driver-kvm2
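The two download lines above show the driver fetch failing its arch-specific checksum lookup (404 on the .sha256 file) and falling back to the common asset name. A hand-rolled sketch of that pattern with plain net/http and sha256 follows; minikube itself uses go-getter, and the destination path here is illustrative, but the release URLs are the ones from the log:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // get fetches url and returns the body, treating non-200 as an error
    // (the "bad response code: 404" case recorded in the log).
    func get(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("bad response code: %d", resp.StatusCode)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
    	// Try the arch-specific asset first, then fall back to the common name, as the log does.
    	for _, name := range []string{"docker-machine-driver-kvm2-amd64", "docker-machine-driver-kvm2"} {
    		bin, err := get(base + name)
    		if err != nil {
    			fmt.Println(name, ":", err, "- trying the common version")
    			continue
    		}
    		want, err := get(base + name + ".sha256")
    		if err != nil {
    			fmt.Println(name, "checksum:", err, "- trying the common version")
    			continue
    		}
    		sum := sha256.Sum256(bin)
    		if !strings.HasPrefix(strings.TrimSpace(string(want)), hex.EncodeToString(sum[:])) {
    			fmt.Println("invalid checksum for", name)
    			continue
    		}
    		// Verified: write the driver to an illustrative destination.
    		if err := os.WriteFile("/tmp/docker-machine-driver-kvm2", bin, 0o755); err != nil {
    			panic(err)
    		}
    		fmt.Println("downloaded and verified", name)
    		return
    	}
    }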
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-865564 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-865564 logs -n 25: (1.406699345s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |       Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status kubelet --all                       |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat kubelet                                |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status docker --all                        |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat docker                                 |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/docker/daemon.json                              |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo docker                         | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | system info                                          |                     |         |         |                     |                     |
	| start   | -p NoKubernetes-201553                               | NoKubernetes-201553 | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                     |         |         |                     |                     |
	|         | --container-runtime=crio                             |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status cri-docker                          |                     |         |         |                     |                     |
	|         | --all --full --no-pager                              |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat cri-docker                             |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | cri-dockerd --version                                |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status containerd                          |                     |         |         |                     |                     |
	|         | --all --full --no-pager                              |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat containerd                             |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/containerd/config.toml                          |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | containerd config dump                               |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status crio --all                          |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo find                           | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                     |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                     |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo crio                           | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | config                                               |                     |         |         |                     |                     |
	| delete  | -p cilium-266997                                     | cilium-266997       | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:48 UTC |
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:48:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:48:18.057387  287882 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:48:18.057681  287882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:18.057687  287882 out.go:358] Setting ErrFile to fd 2...
	I0214 21:48:18.057693  287882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:18.057940  287882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:48:18.058561  287882 out.go:352] Setting JSON to false
	I0214 21:48:18.059803  287882 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9042,"bootTime":1739560656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:48:18.059918  287882 start.go:140] virtualization: kvm guest
	I0214 21:48:18.061559  287882 out.go:177] * [NoKubernetes-201553] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:48:18.063144  287882 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:48:18.063148  287882 notify.go:220] Checking for updates...
	I0214 21:48:18.064253  287882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:48:18.065362  287882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:48:18.066634  287882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:48:18.067727  287882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:48:18.069007  287882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:48:18.070479  287882 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:48:18.070607  287882 config.go:182] Loaded profile config "pause-865564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:48:18.070644  287882 start.go:1882] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0214 21:48:18.070734  287882 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:48:18.107488  287882 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:48:18.108569  287882 start.go:304] selected driver: kvm2
	I0214 21:48:18.108578  287882 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:48:18.108589  287882 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:48:18.108943  287882 start.go:1882] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0214 21:48:18.109004  287882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:48:18.109079  287882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:48:18.128631  287882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:48:18.128678  287882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:48:18.129392  287882 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0214 21:48:18.129573  287882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:48:18.129597  287882 cni.go:84] Creating CNI manager for ""
	I0214 21:48:18.129656  287882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:48:18.129662  287882 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 21:48:18.129681  287882 start.go:1882] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0214 21:48:18.129736  287882 start.go:347] cluster config:
	{Name:NoKubernetes-201553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-201553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:48:18.129851  287882 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:48:18.131339  287882 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-201553
	I0214 21:48:18.133037  287882 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime crio
	W0214 21:48:18.158281  287882 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0214 21:48:18.158396  287882 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/config.json ...
	I0214 21:48:18.158427  287882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/config.json: {Name:mk7e06661eb8cc6792f217ee6439a22b40f90711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:48:18.158584  287882 start.go:360] acquireMachinesLock for NoKubernetes-201553: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:48:18.158652  287882 start.go:364] duration metric: took 51.623µs to acquireMachinesLock for "NoKubernetes-201553"
	I0214 21:48:18.158668  287882 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-201553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 Clus
terName:NoKubernetes-201553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:48:18.158757  287882 start.go:125] createHost starting for "" (driver="kvm2")
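A few lines up, the NoKubernetes start checks whether a preload tarball exists for k8s v0.0.0 and records the resulting 404. A sketch of that existence probe with a plain HEAD request (illustrative only; minikube's preload package does more than this, but the URL is the one from the log):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
    	resp, err := http.Head(url)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		// The log records exactly this case: "status code: 404".
    		fmt.Println("preload not available, status code:", resp.StatusCode)
    		return
    	}
    	fmt.Println("preload exists, size:", resp.ContentLength)
    }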
	W0214 21:48:14.763238  285551 pod_ready.go:104] pod "etcd-pause-865564" is not "Ready", error: <nil>
	I0214 21:48:16.763301  285551 pod_ready.go:94] pod "etcd-pause-865564" is "Ready"
	I0214 21:48:16.763325  285551 pod_ready.go:86] duration metric: took 4.006605421s for pod "etcd-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:16.765839  285551 pod_ready.go:83] waiting for pod "kube-apiserver-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.272938  285551 pod_ready.go:94] pod "kube-apiserver-pause-865564" is "Ready"
	I0214 21:48:17.272967  285551 pod_ready.go:86] duration metric: took 507.103436ms for pod "kube-apiserver-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.275516  285551 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.280349  285551 pod_ready.go:94] pod "kube-controller-manager-pause-865564" is "Ready"
	I0214 21:48:17.280374  285551 pod_ready.go:86] duration metric: took 4.825546ms for pod "kube-controller-manager-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.282712  285551 pod_ready.go:83] waiting for pod "kube-proxy-ctmk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.361507  285551 pod_ready.go:94] pod "kube-proxy-ctmk4" is "Ready"
	I0214 21:48:17.361533  285551 pod_ready.go:86] duration metric: took 78.801115ms for pod "kube-proxy-ctmk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:17.561283  285551 pod_ready.go:83] waiting for pod "kube-scheduler-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:19.566399  285551 pod_ready.go:94] pod "kube-scheduler-pause-865564" is "Ready"
	I0214 21:48:19.566421  285551 pod_ready.go:86] duration metric: took 2.005118126s for pod "kube-scheduler-pause-865564" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 21:48:19.566433  285551 pod_ready.go:40] duration metric: took 11.825016399s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 21:48:19.614109  285551 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 21:48:19.615623  285551 out.go:177] * Done! kubectl is now configured to use "pause-865564" cluster and "default" namespace by default
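The pod_ready lines above poll each control-plane pod until its Ready condition turns true (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). A minimal client-go sketch of that readiness check, assuming a kubeconfig at the default location and using the etcd pod name from the log (this is not minikube's pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has its Ready condition set to True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Re-check until Ready or a timeout elapses, like the waits in the log.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ok, err := podReady(context.Background(), cs, "kube-system", "etcd-pause-865564"); err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }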
	
	
	==> CRI-O <==
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.392178323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569700392159705,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82774a77-eefd-4359-b5b2-e7a2c6b4520f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.392866562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe20b1da-b13b-4ebd-8855-3ddbccdd5db8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.392977636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe20b1da-b13b-4ebd-8855-3ddbccdd5db8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.393446418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe20b1da-b13b-4ebd-8855-3ddbccdd5db8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.437363551Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d80dfb07-531a-41c1-bfdc-6dba73981a76 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.437466267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d80dfb07-531a-41c1-bfdc-6dba73981a76 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.438370152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf6b4ae4-d6e8-40e8-9d4b-6ecc0a16b925 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.438694935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569700438678284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf6b4ae4-d6e8-40e8-9d4b-6ecc0a16b925 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.439309644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1027503-bfdb-43b3-b9fa-abb4d59b3ffe name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.439357269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1027503-bfdb-43b3-b9fa-abb4d59b3ffe name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.439566529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1027503-bfdb-43b3-b9fa-abb4d59b3ffe name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.487177338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71907c7f-e58a-4226-9bb8-5f44d7d26099 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.487284995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71907c7f-e58a-4226-9bb8-5f44d7d26099 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.489221245Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7767794e-3f0d-498f-b663-6a78bc57e8ff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.490668634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569700490027605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7767794e-3f0d-498f-b663-6a78bc57e8ff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.491536476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=733213d7-990c-4d50-9535-4a8883d7e8dc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.491737924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=733213d7-990c-4d50-9535-4a8883d7e8dc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.492801255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=733213d7-990c-4d50-9535-4a8883d7e8dc name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.537186499Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5065d84e-6741-413c-a5e1-902469b9b54f name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.537291673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5065d84e-6741-413c-a5e1-902469b9b54f name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.538378823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=877bd0e7-68cb-4a9e-b903-4f1602f91368 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.538714818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569700538697351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=877bd0e7-68cb-4a9e-b903-4f1602f91368 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.539223494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7620eda-f453-431a-b40b-34670cfd47c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.539271806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7620eda-f453-431a-b40b-34670cfd47c4 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:20 pause-865564 crio[2865]: time="2025-02-14 21:48:20.539523517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7620eda-f453-431a-b40b-34670cfd47c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2f53a3f99f0a4       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   14 seconds ago       Running             kube-proxy                2                   7b7332eb0995e       kube-proxy-ctmk4
	1e305908c290f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago       Running             coredns                   1                   c630a13a16b7b       coredns-668d6bf9bc-b7lr7
	175c27162d025       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   18 seconds ago       Running             kube-apiserver            2                   acf161ed772a2       kube-apiserver-pause-865564
	60fdf4e42954b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago       Running             etcd                      2                   57c7defeba451       etcd-pause-865564
	57abf002fc6e4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   18 seconds ago       Running             kube-scheduler            2                   42832c38830e4       kube-scheduler-pause-865564
	a4eae03b6f655       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   21 seconds ago       Running             kube-controller-manager   1                   63d5c080cbcd8       kube-controller-manager-pause-865564
	62ab3dcff1037       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   23 seconds ago       Exited              kube-proxy                1                   aac7e9ad9fb12       kube-proxy-ctmk4
	06a0792798b77       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   24 seconds ago       Exited              kube-apiserver            1                   16a1cd93512ab       kube-apiserver-pause-865564
	b7a619fcf72a4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   24 seconds ago       Exited              kube-scheduler            1                   dcaf1ee818799       kube-scheduler-pause-865564
	2e3bb892329b7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago       Exited              etcd                      1                   a0ab97e3f8900       etcd-pause-865564
	34e6f11b1e6a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   fc01c16f7f5d0       coredns-668d6bf9bc-b7lr7
	75937c9e58c28       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   About a minute ago   Exited              kube-controller-manager   0                   c87efa4e3877f       kube-controller-manager-pause-865564
	
	
	==> coredns [1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50652 - 41904 "HINFO IN 8347356247982463606.903614856350868915. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006817459s
	
	
	==> coredns [34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2142421356]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.669) (total time: 30003ms):
	Trace[2142421356]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (21:47:27.672)
	Trace[2142421356]: [30.003479159s] [30.003479159s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[934824860]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.670) (total time: 30002ms):
	Trace[934824860]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (21:47:27.672)
	Trace[934824860]: [30.002462533s] [30.002462533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819428949]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.672) (total time: 30000ms):
	Trace[819428949]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:47:27.673)
	Trace[819428949]: [30.000663318s] [30.000663318s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57656 - 50868 "HINFO IN 2807802112623673126.529389337223698941. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00856425s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-865564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-865564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=pause-865564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_46_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:46:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-865564
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:48:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.173
	  Hostname:    pause-865564
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8648f20dcef411d883f35034d09d3c1
	  System UUID:                b8648f20-dcef-411d-883f-35034d09d3c1
	  Boot ID:                    0b2ba9b8-db45-4f6e-94c7-914d37d8ab3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-b7lr7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-pause-865564                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-865564             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-865564    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-ctmk4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-865564             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x8 over 96s)  kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 96s)  kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x7 over 96s)  kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeReady                88s                kubelet          Node pause-865564 status is now: NodeReady
	  Normal  RegisteredNode           86s                node-controller  Node pause-865564 event: Registered Node pause-865564 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-865564 event: Registered Node pause-865564 in Controller
	
	
	==> dmesg <==
	[  +7.434047] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066183] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.218482] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.160149] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.403771] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.413422] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.063280] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.903801] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +1.310685] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.287724] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.084758] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.777620] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.781228] kauditd_printk_skb: 46 callbacks suppressed
	[Feb14 21:47] kauditd_printk_skb: 69 callbacks suppressed
	[ +47.764591] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.133661] systemd-fstab-generator[2353]: Ignoring "noauto" option for root device
	[  +0.178862] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.169399] systemd-fstab-generator[2462]: Ignoring "noauto" option for root device
	[  +0.514096] systemd-fstab-generator[2721]: Ignoring "noauto" option for root device
	[  +1.421585] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[Feb14 21:48] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +0.080091] kauditd_printk_skb: 237 callbacks suppressed
	[  +6.128911] systemd-fstab-generator[3983]: Ignoring "noauto" option for root device
	[  +0.104711] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f] <==
	{"level":"warn","ts":"2025-02-14T21:47:56.760903Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-02-14T21:47:56.761329Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.72.173:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.72.173:2380","--initial-cluster=pause-865564=https://192.168.72.173:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.72.173:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.72.173:2380","--name=pause-865564","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2025-02-14T21:47:56.761793Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-02-14T21:47:56.761859Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-02-14T21:47:56.761939Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.72.173:2380"]}
	{"level":"info","ts":"2025-02-14T21:47:56.761988Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:47:56.762574Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"]}
	{"level":"info","ts":"2025-02-14T21:47:56.762702Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-865564","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.72.173:2380"],"listen-peer-urls":["https://192.168.72.173:2380"],"advertise-client-urls":["https://192.168.72.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clust
er-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2025-02-14T21:47:56.771325Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"8.424661ms"}
	{"level":"info","ts":"2025-02-14T21:47:56.781253Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-02-14T21:47:56.787572Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","commit-index":455}
	{"level":"info","ts":"2025-02-14T21:47:56.787648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b switched to configuration voters=()"}
	{"level":"info","ts":"2025-02-14T21:47:56.787679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became follower at term 2"}
	{"level":"info","ts":"2025-02-14T21:47:56.787688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 994888ec67f4fa0b [peers: [], term: 2, commit: 455, applied: 0, lastindex: 455, lastterm: 2]"}
	
	
	==> etcd [60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e] <==
	{"level":"info","ts":"2025-02-14T21:48:02.372010Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","added-peer-id":"994888ec67f4fa0b","added-peer-peer-urls":["https://192.168.72.173:2380"]}
	{"level":"info","ts":"2025-02-14T21:48:02.372166Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:48:02.372756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:48:02.373729Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:48:02.374083Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"994888ec67f4fa0b","initial-advertise-peer-urls":["https://192.168.72.173:2380"],"listen-peer-urls":["https://192.168.72.173:2380"],"advertise-client-urls":["https://192.168.72.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-14T21:48:02.374218Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-14T21:48:02.371327Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-14T21:48:02.375009Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.173:2380"}
	{"level":"info","ts":"2025-02-14T21:48:02.375041Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.173:2380"}
	{"level":"info","ts":"2025-02-14T21:48:04.248586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b received MsgPreVoteResp from 994888ec67f4fa0b at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became candidate at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b received MsgVoteResp from 994888ec67f4fa0b at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became leader at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 994888ec67f4fa0b elected leader 994888ec67f4fa0b at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.253701Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"994888ec67f4fa0b","local-member-attributes":"{Name:pause-865564 ClientURLs:[https://192.168.72.173:2379]}","request-path":"/0/members/994888ec67f4fa0b/attributes","cluster-id":"37dfb6762fe83f6","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:48:04.253721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:48:04.253954Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:48:04.253995Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:48:04.253749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:48:04.254536Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:48:04.254572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:48:04.255312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.173:2379"}
	{"level":"info","ts":"2025-02-14T21:48:04.255577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:48:20 up 2 min,  0 users,  load average: 0.57, 0.22, 0.08
	Linux pause-865564 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516] <==
	
	
	==> kube-apiserver [175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc] <==
	I0214 21:48:05.539035       1 autoregister_controller.go:144] Starting autoregister controller
	I0214 21:48:05.539061       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 21:48:05.539082       1 cache.go:39] Caches are synced for autoregister controller
	I0214 21:48:05.578488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0214 21:48:05.585776       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0214 21:48:05.591191       1 policy_source.go:240] refreshing policies
	I0214 21:48:05.619547       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0214 21:48:05.619798       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0214 21:48:05.619834       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0214 21:48:05.621664       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0214 21:48:05.622565       1 shared_informer.go:320] Caches are synced for configmaps
	I0214 21:48:05.622641       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0214 21:48:05.622739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 21:48:05.629489       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0214 21:48:05.631988       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0214 21:48:05.632976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 21:48:05.643062       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 21:48:06.423565       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 21:48:07.324267       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0214 21:48:07.374297       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0214 21:48:07.404823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 21:48:07.410815       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 21:48:08.701017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0214 21:48:08.752333       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 21:48:08.800755       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e] <==
	I0214 21:46:55.033272       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:46:55.037891       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:46:55.051440       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-865564" podCIDRs=["10.244.0.0/24"]
	I0214 21:46:55.051467       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.051490       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.085268       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:46:55.085349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0214 21:46:55.085374       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0214 21:46:55.192160       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.953646       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:56.186880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="535.380912ms"
	I0214 21:46:56.250643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.667363ms"
	I0214 21:46:56.251179       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="300.607µs"
	I0214 21:46:56.279210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="161.545µs"
	I0214 21:46:56.666710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.934231ms"
	I0214 21:46:56.679769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.900677ms"
	I0214 21:46:56.680500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="129.881µs"
	I0214 21:46:58.136885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.312µs"
	I0214 21:46:58.161522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="200.492µs"
	I0214 21:47:02.362911       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:47:07.863924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.555µs"
	I0214 21:47:08.198305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="68.641µs"
	I0214 21:47:08.204402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="126.468µs"
	I0214 21:47:36.372180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.944132ms"
	I0214 21:47:36.372798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.846µs"
	
	
	==> kube-controller-manager [a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449] <==
	I0214 21:48:08.387152       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-865564"
	I0214 21:48:08.387187       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0214 21:48:08.389619       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0214 21:48:08.391286       1 shared_informer.go:320] Caches are synced for PV protection
	I0214 21:48:08.398494       1 shared_informer.go:320] Caches are synced for HPA
	I0214 21:48:08.399575       1 shared_informer.go:320] Caches are synced for endpoint
	I0214 21:48:08.399678       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:48:08.399721       1 shared_informer.go:320] Caches are synced for ephemeral
	I0214 21:48:08.401219       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0214 21:48:08.401300       1 shared_informer.go:320] Caches are synced for GC
	I0214 21:48:08.406310       1 shared_informer.go:320] Caches are synced for node
	I0214 21:48:08.406381       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0214 21:48:08.406444       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0214 21:48:08.406452       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0214 21:48:08.406460       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0214 21:48:08.406521       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:48:08.412245       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:48:08.413516       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0214 21:48:08.418024       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0214 21:48:08.430487       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:48:08.439082       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:48:08.708510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="331.775662ms"
	I0214 21:48:08.709052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.277µs"
	I0214 21:48:12.454745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.871953ms"
	I0214 21:48:12.455423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.683µs"
	
	
	==> kube-proxy [2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0214 21:48:06.105348       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0214 21:48:06.121095       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.173"]
	E0214 21:48:06.121291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 21:48:06.167485       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0214 21:48:06.167561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0214 21:48:06.167602       1 server_linux.go:170] "Using iptables Proxier"
	I0214 21:48:06.171005       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 21:48:06.171393       1 server.go:497] "Version info" version="v1.32.1"
	I0214 21:48:06.171438       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:48:06.172852       1 config.go:199] "Starting service config controller"
	I0214 21:48:06.172918       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 21:48:06.172974       1 config.go:105] "Starting endpoint slice config controller"
	I0214 21:48:06.172991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 21:48:06.173762       1 config.go:329] "Starting node config controller"
	I0214 21:48:06.173845       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 21:48:06.273821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0214 21:48:06.273977       1 shared_informer.go:320] Caches are synced for service config
	I0214 21:48:06.274464       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764] <==
	
	
	==> kube-scheduler [57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479] <==
	I0214 21:48:03.353448       1 serving.go:386] Generated self-signed cert in-memory
	W0214 21:48:05.473092       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 21:48:05.473518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 21:48:05.473619       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 21:48:05.473651       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 21:48:05.570281       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 21:48:05.570370       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:48:05.572890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 21:48:05.572933       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:48:05.573518       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 21:48:05.573614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 21:48:05.673360       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5] <==
	
	
	==> kubelet <==
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.185194    3582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-865564?timeout=10s\": dial tcp 192.168.72.173:8443: connect: connection refused" interval="800ms"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: I0214 21:48:02.390583    3582 kubelet_node_status.go:76] "Attempting to register node" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.723859    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.730091    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.741579    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.745303    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.753869    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.754332    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.754698    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.758305    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.758888    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.759361    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.545239    3582 apiserver.go:52] "Watching apiserver"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.567349    3582 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.627966    3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf00b685-019b-4e7f-a4eb-68ea96a926fa-lib-modules\") pod \"kube-proxy-ctmk4\" (UID: \"bf00b685-019b-4e7f-a4eb-68ea96a926fa\") " pod="kube-system/kube-proxy-ctmk4"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.628190    3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf00b685-019b-4e7f-a4eb-68ea96a926fa-xtables-lock\") pod \"kube-proxy-ctmk4\" (UID: \"bf00b685-019b-4e7f-a4eb-68ea96a926fa\") " pod="kube-system/kube-proxy-ctmk4"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680478    3582 kubelet_node_status.go:125] "Node was previously registered" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680530    3582 kubelet_node_status.go:79] "Successfully registered node" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680550    3582 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.681462    3582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.849261    3582 scope.go:117] "RemoveContainer" containerID="34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.849498    3582 scope.go:117] "RemoveContainer" containerID="62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764"
	Feb 14 21:48:11 pause-865564 kubelet[3582]: E0214 21:48:11.691675    3582 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569691690871949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:48:11 pause-865564 kubelet[3582]: E0214 21:48:11.692255    3582 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569691690871949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:48:12 pause-865564 kubelet[3582]: I0214 21:48:12.419208    3582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-865564 -n pause-865564
helpers_test.go:261: (dbg) Run:  kubectl --context pause-865564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-865564 -n pause-865564
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-865564 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-865564 logs -n 25: (1.220391435s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat kubelet                                |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status docker --all                        |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat docker                                 |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/docker/daemon.json                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo docker                         | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | system info                                          |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-201553                               | NoKubernetes-201553      | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status cri-docker                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo cat                            | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo                                | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo find                           | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-266997 sudo crio                           | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-266997                                     | cilium-266997            | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC | 14 Feb 25 21:48 UTC |
	| start   | -p force-systemd-env-054462                          | force-systemd-env-054462 | jenkins | v1.35.0 | 14 Feb 25 21:48 UTC |                     |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 21:48:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 21:48:20.307345  288327 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:48:20.307521  288327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:20.307535  288327 out.go:358] Setting ErrFile to fd 2...
	I0214 21:48:20.307541  288327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:20.307809  288327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:48:20.308611  288327 out.go:352] Setting JSON to false
	I0214 21:48:20.309976  288327 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9044,"bootTime":1739560656,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:48:20.310114  288327 start.go:140] virtualization: kvm guest
	I0214 21:48:20.312332  288327 out.go:177] * [force-systemd-env-054462] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:48:20.313734  288327 notify.go:220] Checking for updates...
	I0214 21:48:20.313747  288327 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:48:20.315082  288327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:48:20.316220  288327 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:48:20.317377  288327 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:48:20.318486  288327 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:48:20.319580  288327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0214 21:48:20.321153  288327 config.go:182] Loaded profile config "NoKubernetes-201553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0214 21:48:20.321307  288327 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:48:20.321468  288327 config.go:182] Loaded profile config "pause-865564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:48:20.321610  288327 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:48:20.359319  288327 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:48:20.360414  288327 start.go:304] selected driver: kvm2
	I0214 21:48:20.360428  288327 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:48:20.360442  288327 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:48:20.361368  288327 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:48:20.361477  288327 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:48:20.377060  288327 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:48:20.377110  288327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:48:20.377361  288327 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 21:48:20.377393  288327 cni.go:84] Creating CNI manager for ""
	I0214 21:48:20.377452  288327 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:48:20.377468  288327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 21:48:20.377545  288327 start.go:347] cluster config:
	{Name:force-systemd-env-054462 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-054462 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:48:20.377701  288327 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:48:20.379288  288327 out.go:177] * Starting "force-systemd-env-054462" primary control-plane node in "force-systemd-env-054462" cluster
	
	
	==> CRI-O <==
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.301420109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569702301398819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4adcc35a-3a54-41a4-a857-6f1b35109e47 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.301980258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7a62eab-3c0c-4c52-8590-b19347dd137c name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.302047468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7a62eab-3c0c-4c52-8590-b19347dd137c name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.302324711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7a62eab-3c0c-4c52-8590-b19347dd137c name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.340376001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d33ae62-7e87-41b0-b2b9-0721f493c59d name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.340457094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d33ae62-7e87-41b0-b2b9-0721f493c59d name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.341304748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d735552-7be4-48a6-a460-993342646f7b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.341642185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569702341622535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d735552-7be4-48a6-a460-993342646f7b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.342002941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d6cb7e5-b7fb-40f9-837e-517095412a83 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.342068466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d6cb7e5-b7fb-40f9-837e-517095412a83 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.342332316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d6cb7e5-b7fb-40f9-837e-517095412a83 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.384141464Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e7e04b9-7ec8-4fb8-a0d9-78c77cf343e7 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.384219930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e7e04b9-7ec8-4fb8-a0d9-78c77cf343e7 name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.385351694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7eb38a7-02e5-4a1a-acee-b25fe6a0113d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.385681863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569702385663464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7eb38a7-02e5-4a1a-acee-b25fe6a0113d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.386594941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1743e36a-d0c1-4e5b-a922-5757be082393 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.386661443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1743e36a-d0c1-4e5b-a922-5757be082393 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.386891055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1743e36a-d0c1-4e5b-a922-5757be082393 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.433022493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=613010c5-bd32-4c35-9e56-156cbb026f1a name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.433169483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=613010c5-bd32-4c35-9e56-156cbb026f1a name=/runtime.v1.RuntimeService/Version
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.434083755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed3b01c6-b383-4408-b5ae-f716a81753f7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.434518290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569702434500117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed3b01c6-b383-4408-b5ae-f716a81753f7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.435015859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1decaa1-c61b-4a48-9fb0-4dab3852204d name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.435071473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1decaa1-c61b-4a48-9fb0-4dab3852204d name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 21:48:22 pause-865564 crio[2865]: time="2025-02-14 21:48:22.435363316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42,PodSandboxId:7b7332eb0995e1402bdff5fe1cb70aa92251890c48823e435add0a83e7ecda25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739569685874795633,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027,PodSandboxId:c630a13a16b7b35b897626660aaba4162cca5889e136181704f2b356b2ee6b27,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739569685867612783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\"
,\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc,PodSandboxId:acf161ed772a29422f747b791183a970c7c1ac8d8c5ad9ded826747e657c56d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739569682071623139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9f
c32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e,PodSandboxId:57c7defeba451f81767a02ba11cf386fb8cc0bc570822c2fc5d7c6ba05a9acdf,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739569682070420894,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]
string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479,PodSandboxId:42832c38830e43051dfedd030530a50ad1fdd94fcb8cc2faf6902a51b472879a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739569682058879023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernet
es.container.hash: 29f857d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449,PodSandboxId:63d5c080cbcd873b6f3799000177b496daff9344115bfcaf0f5cdb67736d797b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739569679540050383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io
.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764,PodSandboxId:aac7e9ad9fb1241dc7038ce0558e7caa8920a2e7482ef03a015d62d27a374841,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_EXITED,CreatedAt:1739569676685258119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ctmk4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf00b685-019b-4e7f-a4eb-68ea96a926fa,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b29
3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516,PodSandboxId:16a1cd93512ab24459b1901d9fcfb2a7ff25c7fbb842282483ef02d847d67085,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_EXITED,CreatedAt:1739569676592941169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af6a876c9fc32936346956e5e13065ff,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5,PodSandboxId:dcaf1ee8187992b93e0cb6a81fd19bd7a6e6495cd2fd4fe964cc611b64b0f307,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_EXITED,CreatedAt:1739569676429871915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35eeb7771ee99a081e5b3f0b1b7ce266,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f,PodSandboxId:a0ab97e3f8900bd985f0cc3b0b2884deac7e0ef2c6ec7d8d45cd10b4d0f0708c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1739569676251661560,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 264c95a5a0673fe78f4cb3adcf860656,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464,PodSandboxId:fc01c16f7f5d02e41f0011779145a8d53de89cce9a8de95bb6c6d5e10b41a1b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1739569617407950383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b7lr7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 118f181c-f6e5-44ab-8fbc-04f91d097136,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"
containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e,PodSandboxId:c87efa4e3877f977f4b3a2389277c89c08a7e4d1791ec36823424b7925ae9458,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_EXITED,CreatedAt:1739569605656302857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-865564,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 695040d7b744326398b4570bc7fddeb8,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1decaa1-c61b-4a48-9fb0-4dab3852204d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2f53a3f99f0a4       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   16 seconds ago       Running             kube-proxy                2                   7b7332eb0995e       kube-proxy-ctmk4
	1e305908c290f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago       Running             coredns                   1                   c630a13a16b7b       coredns-668d6bf9bc-b7lr7
	175c27162d025       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   20 seconds ago       Running             kube-apiserver            2                   acf161ed772a2       kube-apiserver-pause-865564
	60fdf4e42954b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago       Running             etcd                      2                   57c7defeba451       etcd-pause-865564
	57abf002fc6e4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   20 seconds ago       Running             kube-scheduler            2                   42832c38830e4       kube-scheduler-pause-865564
	a4eae03b6f655       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   22 seconds ago       Running             kube-controller-manager   1                   63d5c080cbcd8       kube-controller-manager-pause-865564
	62ab3dcff1037       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a   25 seconds ago       Exited              kube-proxy                1                   aac7e9ad9fb12       kube-proxy-ctmk4
	06a0792798b77       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a   25 seconds ago       Exited              kube-apiserver            1                   16a1cd93512ab       kube-apiserver-pause-865564
	b7a619fcf72a4       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1   26 seconds ago       Exited              kube-scheduler            1                   dcaf1ee818799       kube-scheduler-pause-865564
	2e3bb892329b7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   26 seconds ago       Exited              etcd                      1                   a0ab97e3f8900       etcd-pause-865564
	34e6f11b1e6a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   fc01c16f7f5d0       coredns-668d6bf9bc-b7lr7
	75937c9e58c28       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35   About a minute ago   Exited              kube-controller-manager   0                   c87efa4e3877f       kube-controller-manager-pause-865564
	
	
	==> coredns [1e305908c290f845867dcdef5dd0c4895719b67ec1c6ff90f1997a57e3760027] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50652 - 41904 "HINFO IN 8347356247982463606.903614856350868915. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006817459s
	
	
	==> coredns [34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2142421356]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.669) (total time: 30003ms):
	Trace[2142421356]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (21:47:27.672)
	Trace[2142421356]: [30.003479159s] [30.003479159s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[934824860]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.670) (total time: 30002ms):
	Trace[934824860]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (21:47:27.672)
	Trace[934824860]: [30.002462533s] [30.002462533s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[819428949]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Feb-2025 21:46:57.672) (total time: 30000ms):
	Trace[819428949]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (21:47:27.673)
	Trace[819428949]: [30.000663318s] [30.000663318s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57656 - 50868 "HINFO IN 2807802112623673126.529389337223698941. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00856425s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-865564
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-865564
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a
	                    minikube.k8s.io/name=pause-865564
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_14T21_46_51_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 14 Feb 2025 21:46:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-865564
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 14 Feb 2025 21:48:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 14 Feb 2025 21:48:05 +0000   Fri, 14 Feb 2025 21:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.173
	  Hostname:    pause-865564
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8648f20dcef411d883f35034d09d3c1
	  System UUID:                b8648f20-dcef-411d-883f-35034d09d3c1
	  Boot ID:                    0b2ba9b8-db45-4f6e-94c7-914d37d8ab3f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-b7lr7                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-pause-865564                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         91s
	  kube-system                 kube-apiserver-pause-865564             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-865564    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-ctmk4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-865564             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientMemory  97s (x8 over 98s)  kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 98s)  kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x7 over 98s)  kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     91s                kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeReady                90s                kubelet          Node pause-865564 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node pause-865564 event: Registered Node pause-865564 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-865564 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-865564 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-865564 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-865564 event: Registered Node pause-865564 in Controller
	
	
	==> dmesg <==
	[  +7.434047] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.061472] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066183] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.218482] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.160149] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.403771] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.413422] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.063280] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.903801] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +1.310685] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.287724] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.084758] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.777620] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.781228] kauditd_printk_skb: 46 callbacks suppressed
	[Feb14 21:47] kauditd_printk_skb: 69 callbacks suppressed
	[ +47.764591] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.133661] systemd-fstab-generator[2353]: Ignoring "noauto" option for root device
	[  +0.178862] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.169399] systemd-fstab-generator[2462]: Ignoring "noauto" option for root device
	[  +0.514096] systemd-fstab-generator[2721]: Ignoring "noauto" option for root device
	[  +1.421585] systemd-fstab-generator[3072]: Ignoring "noauto" option for root device
	[Feb14 21:48] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +0.080091] kauditd_printk_skb: 237 callbacks suppressed
	[  +6.128911] systemd-fstab-generator[3983]: Ignoring "noauto" option for root device
	[  +0.104711] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [2e3bb892329b7135677f3a55704f4ec12d1c1b35f42f5b3970f60f7c4552dc3f] <==
	{"level":"warn","ts":"2025-02-14T21:47:56.760903Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-02-14T21:47:56.761329Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.72.173:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.72.173:2380","--initial-cluster=pause-865564=https://192.168.72.173:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.72.173:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.72.173:2380","--name=pause-865564","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2025-02-14T21:47:56.761793Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2025-02-14T21:47:56.761859Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-02-14T21:47:56.761939Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.72.173:2380"]}
	{"level":"info","ts":"2025-02-14T21:47:56.761988Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:47:56.762574Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"]}
	{"level":"info","ts":"2025-02-14T21:47:56.762702Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-865564","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.72.173:2380"],"listen-peer-urls":["https://192.168.72.173:2380"],"advertise-client-urls":["https://192.168.72.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clust
er-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2025-02-14T21:47:56.771325Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"8.424661ms"}
	{"level":"info","ts":"2025-02-14T21:47:56.781253Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-02-14T21:47:56.787572Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","commit-index":455}
	{"level":"info","ts":"2025-02-14T21:47:56.787648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b switched to configuration voters=()"}
	{"level":"info","ts":"2025-02-14T21:47:56.787679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became follower at term 2"}
	{"level":"info","ts":"2025-02-14T21:47:56.787688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 994888ec67f4fa0b [peers: [], term: 2, commit: 455, applied: 0, lastindex: 455, lastterm: 2]"}
	
	
	==> etcd [60fdf4e42954b7515c65a7d919821cc471d1cfc49ea648e9604470d22582139e] <==
	{"level":"info","ts":"2025-02-14T21:48:02.372010Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","added-peer-id":"994888ec67f4fa0b","added-peer-peer-urls":["https://192.168.72.173:2380"]}
	{"level":"info","ts":"2025-02-14T21:48:02.372166Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"37dfb6762fe83f6","local-member-id":"994888ec67f4fa0b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:48:02.372756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-14T21:48:02.373729Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-14T21:48:02.374083Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"994888ec67f4fa0b","initial-advertise-peer-urls":["https://192.168.72.173:2380"],"listen-peer-urls":["https://192.168.72.173:2380"],"advertise-client-urls":["https://192.168.72.173:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.173:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-14T21:48:02.374218Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-14T21:48:02.371327Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-14T21:48:02.375009Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.173:2380"}
	{"level":"info","ts":"2025-02-14T21:48:02.375041Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.173:2380"}
	{"level":"info","ts":"2025-02-14T21:48:04.248586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b received MsgPreVoteResp from 994888ec67f4fa0b at term 2"}
	{"level":"info","ts":"2025-02-14T21:48:04.248873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became candidate at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b received MsgVoteResp from 994888ec67f4fa0b at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"994888ec67f4fa0b became leader at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.248960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 994888ec67f4fa0b elected leader 994888ec67f4fa0b at term 3"}
	{"level":"info","ts":"2025-02-14T21:48:04.253701Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"994888ec67f4fa0b","local-member-attributes":"{Name:pause-865564 ClientURLs:[https://192.168.72.173:2379]}","request-path":"/0/members/994888ec67f4fa0b/attributes","cluster-id":"37dfb6762fe83f6","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-14T21:48:04.253721Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:48:04.253954Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-14T21:48:04.253995Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-14T21:48:04.253749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-14T21:48:04.254536Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:48:04.254572Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-14T21:48:04.255312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.173:2379"}
	{"level":"info","ts":"2025-02-14T21:48:04.255577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:48:22 up 2 min,  0 users,  load average: 0.57, 0.22, 0.08
	Linux pause-865564 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [06a0792798b777e455362c4bd61cfd602bc74ccaba7374b8f41106fd9449d516] <==
	
	
	==> kube-apiserver [175c27162d025a602b89ed4db495b13de2fe12396e88e70a9a23f425bf407edc] <==
	I0214 21:48:05.539035       1 autoregister_controller.go:144] Starting autoregister controller
	I0214 21:48:05.539061       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 21:48:05.539082       1 cache.go:39] Caches are synced for autoregister controller
	I0214 21:48:05.578488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0214 21:48:05.585776       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0214 21:48:05.591191       1 policy_source.go:240] refreshing policies
	I0214 21:48:05.619547       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0214 21:48:05.619798       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0214 21:48:05.619834       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0214 21:48:05.621664       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0214 21:48:05.622565       1 shared_informer.go:320] Caches are synced for configmaps
	I0214 21:48:05.622641       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0214 21:48:05.622739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 21:48:05.629489       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0214 21:48:05.631988       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	E0214 21:48:05.632976       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0214 21:48:05.643062       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 21:48:06.423565       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 21:48:07.324267       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0214 21:48:07.374297       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0214 21:48:07.404823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 21:48:07.410815       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 21:48:08.701017       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0214 21:48:08.752333       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 21:48:08.800755       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [75937c9e58c2849c79594e7796dfe7ecf9c76a45d46ed6715cc692281cf8ac3e] <==
	I0214 21:46:55.033272       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:46:55.037891       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:46:55.051440       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-865564" podCIDRs=["10.244.0.0/24"]
	I0214 21:46:55.051467       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.051490       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.085268       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:46:55.085349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0214 21:46:55.085374       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0214 21:46:55.192160       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:55.953646       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:46:56.186880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="535.380912ms"
	I0214 21:46:56.250643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.667363ms"
	I0214 21:46:56.251179       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="300.607µs"
	I0214 21:46:56.279210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="161.545µs"
	I0214 21:46:56.666710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.934231ms"
	I0214 21:46:56.679769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.900677ms"
	I0214 21:46:56.680500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="129.881µs"
	I0214 21:46:58.136885       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.312µs"
	I0214 21:46:58.161522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="200.492µs"
	I0214 21:47:02.362911       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:47:07.863924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.555µs"
	I0214 21:47:08.198305       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="68.641µs"
	I0214 21:47:08.204402       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="126.468µs"
	I0214 21:47:36.372180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.944132ms"
	I0214 21:47:36.372798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="40.846µs"
	
	
	==> kube-controller-manager [a4eae03b6f655e2a5e4b67a3ffed88bc9e2dfca485d4189a9dfae74c43792449] <==
	I0214 21:48:08.387152       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-865564"
	I0214 21:48:08.387187       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0214 21:48:08.389619       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0214 21:48:08.391286       1 shared_informer.go:320] Caches are synced for PV protection
	I0214 21:48:08.398494       1 shared_informer.go:320] Caches are synced for HPA
	I0214 21:48:08.399575       1 shared_informer.go:320] Caches are synced for endpoint
	I0214 21:48:08.399678       1 shared_informer.go:320] Caches are synced for daemon sets
	I0214 21:48:08.399721       1 shared_informer.go:320] Caches are synced for ephemeral
	I0214 21:48:08.401219       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0214 21:48:08.401300       1 shared_informer.go:320] Caches are synced for GC
	I0214 21:48:08.406310       1 shared_informer.go:320] Caches are synced for node
	I0214 21:48:08.406381       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0214 21:48:08.406444       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0214 21:48:08.406452       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0214 21:48:08.406460       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0214 21:48:08.406521       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-865564"
	I0214 21:48:08.412245       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:48:08.413516       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0214 21:48:08.418024       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0214 21:48:08.430487       1 shared_informer.go:320] Caches are synced for garbage collector
	I0214 21:48:08.439082       1 shared_informer.go:320] Caches are synced for resource quota
	I0214 21:48:08.708510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="331.775662ms"
	I0214 21:48:08.709052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.277µs"
	I0214 21:48:12.454745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.871953ms"
	I0214 21:48:12.455423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.683µs"
	
	
	==> kube-proxy [2f53a3f99f0a42ee1983ecc134525f7a4cfef06be1c07f07a84927c196152a42] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0214 21:48:06.105348       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0214 21:48:06.121095       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.173"]
	E0214 21:48:06.121291       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0214 21:48:06.167485       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0214 21:48:06.167561       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0214 21:48:06.167602       1 server_linux.go:170] "Using iptables Proxier"
	I0214 21:48:06.171005       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0214 21:48:06.171393       1 server.go:497] "Version info" version="v1.32.1"
	I0214 21:48:06.171438       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:48:06.172852       1 config.go:199] "Starting service config controller"
	I0214 21:48:06.172918       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0214 21:48:06.172974       1 config.go:105] "Starting endpoint slice config controller"
	I0214 21:48:06.172991       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0214 21:48:06.173762       1 config.go:329] "Starting node config controller"
	I0214 21:48:06.173845       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0214 21:48:06.273821       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0214 21:48:06.273977       1 shared_informer.go:320] Caches are synced for service config
	I0214 21:48:06.274464       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764] <==
	
	
	==> kube-scheduler [57abf002fc6e47a3b1640d080e548ba9d8cdfb6f358e956252efe72c92f77479] <==
	I0214 21:48:03.353448       1 serving.go:386] Generated self-signed cert in-memory
	W0214 21:48:05.473092       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0214 21:48:05.473518       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 21:48:05.473619       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0214 21:48:05.473651       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 21:48:05.570281       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0214 21:48:05.570370       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 21:48:05.572890       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0214 21:48:05.572933       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 21:48:05.573518       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0214 21:48:05.573614       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0214 21:48:05.673360       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b7a619fcf72a4bf4f51a407c6dcd4c7a1520032870b25a0eaa439dc9210928f5] <==
	
	
	==> kubelet <==
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.723859    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.730091    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.741579    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:02 pause-865564 kubelet[3582]: E0214 21:48:02.745303    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.753869    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.754332    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:03 pause-865564 kubelet[3582]: E0214 21:48:03.754698    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.758305    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.758888    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:04 pause-865564 kubelet[3582]: E0214 21:48:04.759361    3582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-865564\" not found" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.545239    3582 apiserver.go:52] "Watching apiserver"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.567349    3582 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.627966    3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf00b685-019b-4e7f-a4eb-68ea96a926fa-lib-modules\") pod \"kube-proxy-ctmk4\" (UID: \"bf00b685-019b-4e7f-a4eb-68ea96a926fa\") " pod="kube-system/kube-proxy-ctmk4"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.628190    3582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf00b685-019b-4e7f-a4eb-68ea96a926fa-xtables-lock\") pod \"kube-proxy-ctmk4\" (UID: \"bf00b685-019b-4e7f-a4eb-68ea96a926fa\") " pod="kube-system/kube-proxy-ctmk4"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680478    3582 kubelet_node_status.go:125] "Node was previously registered" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680530    3582 kubelet_node_status.go:79] "Successfully registered node" node="pause-865564"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.680550    3582 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.681462    3582 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.849261    3582 scope.go:117] "RemoveContainer" containerID="34e6f11b1e6a8155ff8d86f019d2c5d50c11250d28e80ee911a476c8f45ed464"
	Feb 14 21:48:05 pause-865564 kubelet[3582]: I0214 21:48:05.849498    3582 scope.go:117] "RemoveContainer" containerID="62ab3dcff10375340c0cb28180ce7eb6f21547e4ba99af8f5c11ebc22ea5f764"
	Feb 14 21:48:11 pause-865564 kubelet[3582]: E0214 21:48:11.691675    3582 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569691690871949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:48:11 pause-865564 kubelet[3582]: E0214 21:48:11.692255    3582 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569691690871949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:48:12 pause-865564 kubelet[3582]: I0214 21:48:12.419208    3582 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Feb 14 21:48:21 pause-865564 kubelet[3582]: E0214 21:48:21.693930    3582 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569701693536693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 14 21:48:21 pause-865564 kubelet[3582]: E0214 21:48:21.693998    3582 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739569701693536693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-865564 -n pause-865564
helpers_test.go:261: (dbg) Run:  kubectl --context pause-865564 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (45.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (333.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0214 21:49:56.451200  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:50:13.381571  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m33.622912923s)

                                                
                                                
-- stdout --
	* [old-k8s-version-201745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-201745" primary control-plane node in "old-k8s-version-201745" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:49:50.556636  290030 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:49:50.556721  290030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:49:50.556729  290030 out.go:358] Setting ErrFile to fd 2...
	I0214 21:49:50.556733  290030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:49:50.557374  290030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:49:50.558297  290030 out.go:352] Setting JSON to false
	I0214 21:49:50.559474  290030 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9135,"bootTime":1739560656,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:49:50.559576  290030 start.go:140] virtualization: kvm guest
	I0214 21:49:50.561220  290030 out.go:177] * [old-k8s-version-201745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:49:50.562817  290030 notify.go:220] Checking for updates...
	I0214 21:49:50.562832  290030 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:49:50.563953  290030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:49:50.565053  290030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:49:50.566082  290030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:49:50.567119  290030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:49:50.568227  290030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:49:50.569590  290030 config.go:182] Loaded profile config "cert-expiration-191481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:49:50.569702  290030 config.go:182] Loaded profile config "cert-options-733237": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:49:50.569776  290030 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:49:50.569863  290030 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:49:50.604287  290030 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:49:50.605312  290030 start.go:304] selected driver: kvm2
	I0214 21:49:50.605329  290030 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:49:50.605350  290030 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:49:50.606325  290030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:49:50.606431  290030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:49:50.621235  290030 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:49:50.621284  290030 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 21:49:50.621529  290030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:49:50.621561  290030 cni.go:84] Creating CNI manager for ""
	I0214 21:49:50.621617  290030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:49:50.621636  290030 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 21:49:50.621701  290030 start.go:347] cluster config:
	{Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:49:50.621798  290030 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:49:50.623212  290030 out.go:177] * Starting "old-k8s-version-201745" primary control-plane node in "old-k8s-version-201745" cluster
	I0214 21:49:50.624373  290030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:49:50.624409  290030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0214 21:49:50.624419  290030 cache.go:56] Caching tarball of preloaded images
	I0214 21:49:50.624493  290030 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 21:49:50.624505  290030 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0214 21:49:50.624590  290030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json ...
	I0214 21:49:50.624611  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json: {Name:mkb20fb7bf156a2ba7e1b89585c0fa08f6be2c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:49:50.624747  290030 start.go:360] acquireMachinesLock for old-k8s-version-201745: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:50:55.363261  290030 start.go:364] duration metric: took 1m4.73848551s to acquireMachinesLock for "old-k8s-version-201745"
	I0214 21:50:55.363378  290030 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 21:50:55.363470  290030 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 21:50:55.365187  290030 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0214 21:50:55.365395  290030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:50:55.365456  290030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:50:55.381554  290030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
	I0214 21:50:55.381962  290030 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:50:55.382562  290030 main.go:141] libmachine: Using API Version  1
	I0214 21:50:55.382592  290030 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:50:55.382964  290030 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:50:55.383187  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:50:55.383362  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:50:55.383507  290030 start.go:159] libmachine.API.Create for "old-k8s-version-201745" (driver="kvm2")
	I0214 21:50:55.383547  290030 client.go:168] LocalClient.Create starting
	I0214 21:50:55.383582  290030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 21:50:55.383632  290030 main.go:141] libmachine: Decoding PEM data...
	I0214 21:50:55.383663  290030 main.go:141] libmachine: Parsing certificate...
	I0214 21:50:55.383754  290030 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 21:50:55.383787  290030 main.go:141] libmachine: Decoding PEM data...
	I0214 21:50:55.383805  290030 main.go:141] libmachine: Parsing certificate...
	I0214 21:50:55.383832  290030 main.go:141] libmachine: Running pre-create checks...
	I0214 21:50:55.383846  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .PreCreateCheck
	I0214 21:50:55.384196  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:50:55.384611  290030 main.go:141] libmachine: Creating machine...
	I0214 21:50:55.384627  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .Create
	I0214 21:50:55.384759  290030 main.go:141] libmachine: (old-k8s-version-201745) creating KVM machine...
	I0214 21:50:55.384788  290030 main.go:141] libmachine: (old-k8s-version-201745) creating network...
	I0214 21:50:55.385923  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found existing default KVM network
	I0214 21:50:55.387518  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.387312  290690 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:23:36:5f} reservation:<nil>}
	I0214 21:50:55.388468  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.388381  290690 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b6:ad:f4} reservation:<nil>}
	I0214 21:50:55.389476  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.389401  290690 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:c5:57} reservation:<nil>}
	I0214 21:50:55.390645  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.390547  290690 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a1920}
	I0214 21:50:55.390676  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | created network xml: 
	I0214 21:50:55.390694  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | <network>
	I0214 21:50:55.390703  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   <name>mk-old-k8s-version-201745</name>
	I0214 21:50:55.390715  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   <dns enable='no'/>
	I0214 21:50:55.390722  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   
	I0214 21:50:55.390733  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0214 21:50:55.390747  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |     <dhcp>
	I0214 21:50:55.390757  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0214 21:50:55.390768  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |     </dhcp>
	I0214 21:50:55.390776  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   </ip>
	I0214 21:50:55.390782  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG |   
	I0214 21:50:55.390790  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | </network>
	I0214 21:50:55.390797  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | 
	I0214 21:50:55.395491  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | trying to create private KVM network mk-old-k8s-version-201745 192.168.72.0/24...
	I0214 21:50:55.462132  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | private KVM network mk-old-k8s-version-201745 192.168.72.0/24 created
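	[editor note] The XML dumped above is the isolated libvirt network (gateway 192.168.72.1, DHCP range .2-.253) that the kvm2 driver defines for this profile. As a rough illustration of how such a definition can be rendered, here is a small self-contained Go sketch using text/template; the template text and parameter struct are assumptions for illustration, not the driver's actual code.

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the shape of the XML printed in the log above; the
// exact template minikube uses may differ, this is an illustrative sketch.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	// Values taken from the "using free private subnet 192.168.72.0/24" line above.
	p := netParams{
		Name:      "mk-old-k8s-version-201745",
		Gateway:   "192.168.72.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.72.2",
		ClientMax: "192.168.72.253",
	}
	tmpl := template.Must(template.New("net").Parse(networkTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
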
	I0214 21:50:55.462163  290030 main.go:141] libmachine: (old-k8s-version-201745) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745 ...
	I0214 21:50:55.462180  290030 main.go:141] libmachine: (old-k8s-version-201745) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 21:50:55.462199  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.462108  290690 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:50:55.462276  290030 main.go:141] libmachine: (old-k8s-version-201745) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 21:50:55.744579  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.744460  290690 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa...
	I0214 21:50:55.791378  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.791288  290690 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/old-k8s-version-201745.rawdisk...
	I0214 21:50:55.791407  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Writing magic tar header
	I0214 21:50:55.791424  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Writing SSH key tar header
	I0214 21:50:55.791446  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:55.791384  290690 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745 ...
	I0214 21:50:55.791567  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745
	I0214 21:50:55.791609  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 21:50:55.791631  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745 (perms=drwx------)
	I0214 21:50:55.791651  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:50:55.791666  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 21:50:55.791677  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 21:50:55.791685  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home/jenkins
	I0214 21:50:55.791698  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | checking permissions on dir: /home
	I0214 21:50:55.791712  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | skipping /home - not owner
	I0214 21:50:55.791733  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 21:50:55.791752  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 21:50:55.791767  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 21:50:55.791786  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 21:50:55.791801  290030 main.go:141] libmachine: (old-k8s-version-201745) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 21:50:55.791811  290030 main.go:141] libmachine: (old-k8s-version-201745) creating domain...
	I0214 21:50:55.792676  290030 main.go:141] libmachine: (old-k8s-version-201745) define libvirt domain using xml: 
	I0214 21:50:55.792698  290030 main.go:141] libmachine: (old-k8s-version-201745) <domain type='kvm'>
	I0214 21:50:55.792709  290030 main.go:141] libmachine: (old-k8s-version-201745)   <name>old-k8s-version-201745</name>
	I0214 21:50:55.792727  290030 main.go:141] libmachine: (old-k8s-version-201745)   <memory unit='MiB'>2200</memory>
	I0214 21:50:55.792738  290030 main.go:141] libmachine: (old-k8s-version-201745)   <vcpu>2</vcpu>
	I0214 21:50:55.792767  290030 main.go:141] libmachine: (old-k8s-version-201745)   <features>
	I0214 21:50:55.792778  290030 main.go:141] libmachine: (old-k8s-version-201745)     <acpi/>
	I0214 21:50:55.792784  290030 main.go:141] libmachine: (old-k8s-version-201745)     <apic/>
	I0214 21:50:55.792794  290030 main.go:141] libmachine: (old-k8s-version-201745)     <pae/>
	I0214 21:50:55.792811  290030 main.go:141] libmachine: (old-k8s-version-201745)     
	I0214 21:50:55.792825  290030 main.go:141] libmachine: (old-k8s-version-201745)   </features>
	I0214 21:50:55.792838  290030 main.go:141] libmachine: (old-k8s-version-201745)   <cpu mode='host-passthrough'>
	I0214 21:50:55.792851  290030 main.go:141] libmachine: (old-k8s-version-201745)   
	I0214 21:50:55.792863  290030 main.go:141] libmachine: (old-k8s-version-201745)   </cpu>
	I0214 21:50:55.792876  290030 main.go:141] libmachine: (old-k8s-version-201745)   <os>
	I0214 21:50:55.792893  290030 main.go:141] libmachine: (old-k8s-version-201745)     <type>hvm</type>
	I0214 21:50:55.792907  290030 main.go:141] libmachine: (old-k8s-version-201745)     <boot dev='cdrom'/>
	I0214 21:50:55.792920  290030 main.go:141] libmachine: (old-k8s-version-201745)     <boot dev='hd'/>
	I0214 21:50:55.792934  290030 main.go:141] libmachine: (old-k8s-version-201745)     <bootmenu enable='no'/>
	I0214 21:50:55.792954  290030 main.go:141] libmachine: (old-k8s-version-201745)   </os>
	I0214 21:50:55.792985  290030 main.go:141] libmachine: (old-k8s-version-201745)   <devices>
	I0214 21:50:55.793016  290030 main.go:141] libmachine: (old-k8s-version-201745)     <disk type='file' device='cdrom'>
	I0214 21:50:55.793038  290030 main.go:141] libmachine: (old-k8s-version-201745)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/boot2docker.iso'/>
	I0214 21:50:55.793046  290030 main.go:141] libmachine: (old-k8s-version-201745)       <target dev='hdc' bus='scsi'/>
	I0214 21:50:55.793079  290030 main.go:141] libmachine: (old-k8s-version-201745)       <readonly/>
	I0214 21:50:55.793099  290030 main.go:141] libmachine: (old-k8s-version-201745)     </disk>
	I0214 21:50:55.793122  290030 main.go:141] libmachine: (old-k8s-version-201745)     <disk type='file' device='disk'>
	I0214 21:50:55.793140  290030 main.go:141] libmachine: (old-k8s-version-201745)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 21:50:55.793158  290030 main.go:141] libmachine: (old-k8s-version-201745)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/old-k8s-version-201745.rawdisk'/>
	I0214 21:50:55.793169  290030 main.go:141] libmachine: (old-k8s-version-201745)       <target dev='hda' bus='virtio'/>
	I0214 21:50:55.793178  290030 main.go:141] libmachine: (old-k8s-version-201745)     </disk>
	I0214 21:50:55.793189  290030 main.go:141] libmachine: (old-k8s-version-201745)     <interface type='network'>
	I0214 21:50:55.793201  290030 main.go:141] libmachine: (old-k8s-version-201745)       <source network='mk-old-k8s-version-201745'/>
	I0214 21:50:55.793209  290030 main.go:141] libmachine: (old-k8s-version-201745)       <model type='virtio'/>
	I0214 21:50:55.793223  290030 main.go:141] libmachine: (old-k8s-version-201745)     </interface>
	I0214 21:50:55.793241  290030 main.go:141] libmachine: (old-k8s-version-201745)     <interface type='network'>
	I0214 21:50:55.793255  290030 main.go:141] libmachine: (old-k8s-version-201745)       <source network='default'/>
	I0214 21:50:55.793263  290030 main.go:141] libmachine: (old-k8s-version-201745)       <model type='virtio'/>
	I0214 21:50:55.793275  290030 main.go:141] libmachine: (old-k8s-version-201745)     </interface>
	I0214 21:50:55.793283  290030 main.go:141] libmachine: (old-k8s-version-201745)     <serial type='pty'>
	I0214 21:50:55.793295  290030 main.go:141] libmachine: (old-k8s-version-201745)       <target port='0'/>
	I0214 21:50:55.793302  290030 main.go:141] libmachine: (old-k8s-version-201745)     </serial>
	I0214 21:50:55.793318  290030 main.go:141] libmachine: (old-k8s-version-201745)     <console type='pty'>
	I0214 21:50:55.793332  290030 main.go:141] libmachine: (old-k8s-version-201745)       <target type='serial' port='0'/>
	I0214 21:50:55.793345  290030 main.go:141] libmachine: (old-k8s-version-201745)     </console>
	I0214 21:50:55.793355  290030 main.go:141] libmachine: (old-k8s-version-201745)     <rng model='virtio'>
	I0214 21:50:55.793370  290030 main.go:141] libmachine: (old-k8s-version-201745)       <backend model='random'>/dev/random</backend>
	I0214 21:50:55.793385  290030 main.go:141] libmachine: (old-k8s-version-201745)     </rng>
	I0214 21:50:55.793394  290030 main.go:141] libmachine: (old-k8s-version-201745)     
	I0214 21:50:55.793402  290030 main.go:141] libmachine: (old-k8s-version-201745)     
	I0214 21:50:55.793419  290030 main.go:141] libmachine: (old-k8s-version-201745)   </devices>
	I0214 21:50:55.793428  290030 main.go:141] libmachine: (old-k8s-version-201745) </domain>
	I0214 21:50:55.793438  290030 main.go:141] libmachine: (old-k8s-version-201745) 
	I0214 21:50:55.797313  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:cc:46:24 in network default
	I0214 21:50:55.797892  290030 main.go:141] libmachine: (old-k8s-version-201745) starting domain...
	I0214 21:50:55.797914  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:55.797922  290030 main.go:141] libmachine: (old-k8s-version-201745) ensuring networks are active...
	I0214 21:50:55.798541  290030 main.go:141] libmachine: (old-k8s-version-201745) Ensuring network default is active
	I0214 21:50:55.798875  290030 main.go:141] libmachine: (old-k8s-version-201745) Ensuring network mk-old-k8s-version-201745 is active
	I0214 21:50:55.799462  290030 main.go:141] libmachine: (old-k8s-version-201745) getting domain XML...
	I0214 21:50:55.800146  290030 main.go:141] libmachine: (old-k8s-version-201745) creating domain...
	I0214 21:50:56.148493  290030 main.go:141] libmachine: (old-k8s-version-201745) waiting for IP...
	I0214 21:50:56.149380  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:56.149909  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:56.150019  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:56.149906  290690 retry.go:31] will retry after 261.837264ms: waiting for domain to come up
	I0214 21:50:56.413460  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:56.414105  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:56.414163  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:56.414075  290690 retry.go:31] will retry after 330.242166ms: waiting for domain to come up
	I0214 21:50:56.746462  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:56.747030  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:56.747061  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:56.746989  290690 retry.go:31] will retry after 416.03943ms: waiting for domain to come up
	I0214 21:50:57.164718  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:57.165222  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:57.165261  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:57.165204  290690 retry.go:31] will retry after 578.08552ms: waiting for domain to come up
	I0214 21:50:57.746263  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:57.747028  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:57.747059  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:57.746994  290690 retry.go:31] will retry after 651.818557ms: waiting for domain to come up
	I0214 21:50:58.400987  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:58.401535  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:58.401566  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:58.401487  290690 retry.go:31] will retry after 588.56538ms: waiting for domain to come up
	I0214 21:50:58.991361  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:58.991875  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:58.991907  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:58.991851  290690 retry.go:31] will retry after 932.433175ms: waiting for domain to come up
	I0214 21:50:59.925797  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:50:59.926454  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:50:59.926486  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:50:59.926398  290690 retry.go:31] will retry after 1.189025506s: waiting for domain to come up
	I0214 21:51:01.116921  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:01.117469  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:01.117495  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:01.117446  290690 retry.go:31] will retry after 1.257559774s: waiting for domain to come up
	I0214 21:51:02.376775  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:02.377116  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:02.377144  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:02.377098  290690 retry.go:31] will retry after 1.619179944s: waiting for domain to come up
	I0214 21:51:03.998332  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:03.998979  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:03.999012  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:03.998926  290690 retry.go:31] will retry after 2.080022335s: waiting for domain to come up
	I0214 21:51:06.080451  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:06.080994  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:06.081026  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:06.080958  290690 retry.go:31] will retry after 2.295955346s: waiting for domain to come up
	I0214 21:51:08.378010  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:08.378472  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:08.378508  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:08.378432  290690 retry.go:31] will retry after 3.539619619s: waiting for domain to come up
	I0214 21:51:11.920280  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:11.920742  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:51:11.920773  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:51:11.920699  290690 retry.go:31] will retry after 3.678466874s: waiting for domain to come up
	I0214 21:51:15.640099  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.640675  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has current primary IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.640700  290030 main.go:141] libmachine: (old-k8s-version-201745) found domain IP: 192.168.72.19
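	[editor note] The retry.go lines above show the driver polling the network's DHCP leases with a growing delay until the new domain reports an address. A minimal Go sketch of that polling pattern follows; lookupIP and the short deadline are placeholders for illustration, not minikube's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
// here it always fails, the way the real query does until the guest boots.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Second) // shortened so the sketch finishes quickly
	for time.Now().Before(deadline) {
		ip, err := lookupIP("old-k8s-version-201745")
		if err == nil {
			fmt.Println("found domain IP:", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the delay, loosely matching the intervals in the log
	}
	fmt.Println("timed out waiting for domain IP")
}
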
	I0214 21:51:15.640713  290030 main.go:141] libmachine: (old-k8s-version-201745) reserving static IP address...
	I0214 21:51:15.640986  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-201745", mac: "52:54:00:6d:30:ba", ip: "192.168.72.19"} in network mk-old-k8s-version-201745
	I0214 21:51:15.715520  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Getting to WaitForSSH function...
	I0214 21:51:15.715550  290030 main.go:141] libmachine: (old-k8s-version-201745) reserved static IP address 192.168.72.19 for domain old-k8s-version-201745
	I0214 21:51:15.715563  290030 main.go:141] libmachine: (old-k8s-version-201745) waiting for SSH...
	I0214 21:51:15.718541  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:15.719032  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745
	I0214 21:51:15.719060  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find defined IP address of network mk-old-k8s-version-201745 interface with MAC address 52:54:00:6d:30:ba
	I0214 21:51:15.719260  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH client type: external
	I0214 21:51:15.719310  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa (-rw-------)
	I0214 21:51:15.719384  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:51:15.719407  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | About to run SSH command:
	I0214 21:51:15.719421  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | exit 0
	I0214 21:51:15.723259  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | SSH cmd err, output: exit status 255: 
	I0214 21:51:15.723284  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0214 21:51:15.723294  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | command : exit 0
	I0214 21:51:15.723305  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | err     : exit status 255
	I0214 21:51:15.723318  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | output  : 
	I0214 21:51:18.723432  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Getting to WaitForSSH function...
	I0214 21:51:18.725873  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.726290  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.726324  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.726416  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH client type: external
	I0214 21:51:18.726440  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa (-rw-------)
	I0214 21:51:18.726492  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:51:18.726516  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | About to run SSH command:
	I0214 21:51:18.726559  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | exit 0
	I0214 21:51:18.854734  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | SSH cmd err, output: <nil>: 
	I0214 21:51:18.854978  290030 main.go:141] libmachine: (old-k8s-version-201745) KVM machine creation complete
	I0214 21:51:18.855264  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:51:18.855878  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:18.856070  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:18.856221  290030 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 21:51:18.856246  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetState
	I0214 21:51:18.857655  290030 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 21:51:18.857667  290030 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 21:51:18.857672  290030 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 21:51:18.857678  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:18.860018  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.860340  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.860362  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.860546  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:18.860711  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.860828  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.860966  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:18.861120  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:18.861388  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:18.861403  290030 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 21:51:18.973467  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:51:18.973488  290030 main.go:141] libmachine: Detecting the provisioner...
	I0214 21:51:18.973498  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:18.975816  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.976116  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:18.976159  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:18.976279  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:18.976456  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.976572  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:18.976662  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:18.976784  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:18.976987  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:18.977004  290030 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 21:51:19.090945  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 21:51:19.091009  290030 main.go:141] libmachine: found compatible host: buildroot
	I0214 21:51:19.091023  290030 main.go:141] libmachine: Provisioning with buildroot...
	I0214 21:51:19.091033  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.091241  290030 buildroot.go:166] provisioning hostname "old-k8s-version-201745"
	I0214 21:51:19.091270  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.091452  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.094065  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.094414  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.094440  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.094593  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.094795  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.094958  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.095110  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.095272  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.095437  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.095454  290030 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-201745 && echo "old-k8s-version-201745" | sudo tee /etc/hostname
	I0214 21:51:19.220062  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-201745
	
	I0214 21:51:19.220089  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.223057  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.223416  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.223447  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.223621  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.223801  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.223975  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.224107  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.224265  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.224482  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.224505  290030 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-201745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-201745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-201745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:51:19.343025  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:51:19.343046  290030 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:51:19.343072  290030 buildroot.go:174] setting up certificates
	I0214 21:51:19.343085  290030 provision.go:84] configureAuth start
	I0214 21:51:19.343094  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:51:19.343305  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:19.345461  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.345781  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.345802  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.346004  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.348488  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.348896  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.348924  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.349075  290030 provision.go:143] copyHostCerts
	I0214 21:51:19.349175  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:51:19.349195  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:51:19.349262  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:51:19.349347  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:51:19.349355  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:51:19.349376  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:51:19.349425  290030 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:51:19.349431  290030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:51:19.349447  290030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:51:19.349490  290030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-201745 san=[127.0.0.1 192.168.72.19 localhost minikube old-k8s-version-201745]
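	[editor note] The configureAuth step above generates a server certificate signed by the minikube CA with the SANs listed in the san=[...] field. The following self-contained Go sketch shows the same idea with the standard crypto/x509 package; it creates a throwaway CA instead of loading minikube's ca.pem, so it illustrates the mechanism rather than reproducing minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate with the SANs from the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-201745"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.19")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-201745"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
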
	I0214 21:51:19.490071  290030 provision.go:177] copyRemoteCerts
	I0214 21:51:19.490142  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:51:19.490171  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.492319  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.492662  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.492693  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.492871  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.493054  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.493217  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.493348  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:19.580628  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0214 21:51:19.606167  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:51:19.630070  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:51:19.654540  290030 provision.go:87] duration metric: took 311.444497ms to configureAuth
	I0214 21:51:19.654561  290030 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:51:19.654747  290030 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:51:19.654829  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.657192  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.657555  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.657605  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.657786  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.657983  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.658158  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.658304  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.658512  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:19.658770  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:19.658789  290030 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:51:19.895793  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:51:19.895830  290030 main.go:141] libmachine: Checking connection to Docker...
	I0214 21:51:19.895843  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetURL
	I0214 21:51:19.897101  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | using libvirt version 6000000
	I0214 21:51:19.899085  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.899443  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.899474  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.899641  290030 main.go:141] libmachine: Docker is up and running!
	I0214 21:51:19.899659  290030 main.go:141] libmachine: Reticulating splines...
	I0214 21:51:19.899668  290030 client.go:171] duration metric: took 24.516107336s to LocalClient.Create
	I0214 21:51:19.899696  290030 start.go:167] duration metric: took 24.516190058s to libmachine.API.Create "old-k8s-version-201745"
	I0214 21:51:19.899707  290030 start.go:293] postStartSetup for "old-k8s-version-201745" (driver="kvm2")
	I0214 21:51:19.899716  290030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:51:19.899733  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:19.899970  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:51:19.899997  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:19.901854  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.902204  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:19.902252  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:19.902409  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:19.902568  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:19.902752  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:19.902925  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:19.988715  290030 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:51:19.992916  290030 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:51:19.992942  290030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:51:19.992999  290030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:51:19.993102  290030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:51:19.993218  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:51:20.002821  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:20.027189  290030 start.go:296] duration metric: took 127.471428ms for postStartSetup
	I0214 21:51:20.027234  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:51:20.027754  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:20.030174  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.030496  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.030541  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.030800  290030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json ...
	I0214 21:51:20.031021  290030 start.go:128] duration metric: took 24.667536425s to createHost
	I0214 21:51:20.031048  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.033286  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.033584  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.033612  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.033720  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.033920  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.034081  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.034221  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.034383  290030 main.go:141] libmachine: Using SSH client type: native
	I0214 21:51:20.034560  290030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:51:20.034571  290030 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:51:20.146699  290030 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739569880.118420137
	
	I0214 21:51:20.146719  290030 fix.go:216] guest clock: 1739569880.118420137
	I0214 21:51:20.146726  290030 fix.go:229] Guest: 2025-02-14 21:51:20.118420137 +0000 UTC Remote: 2025-02-14 21:51:20.031034691 +0000 UTC m=+89.511546951 (delta=87.385446ms)
	I0214 21:51:20.146742  290030 fix.go:200] guest clock delta is within tolerance: 87.385446ms
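	[editor note] The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it to the host clock, and skip resyncing when the difference is small. The sketch below reproduces that arithmetic with the values from this run; the 2-second tolerance is an assumed threshold for illustration, not necessarily the one minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestRaw := "1739569880.118420137"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp taken when the command returned; hard-coded here to
	// match the "Remote:" value in the log (real code would use time.Now()).
	host := time.Date(2025, 2, 14, 21, 51, 20, 31034691, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v; would resync the guest clock\n", delta)
	}
}
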
	I0214 21:51:20.146747  290030 start.go:83] releasing machines lock for "old-k8s-version-201745", held for 24.783455513s
	I0214 21:51:20.146767  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.146964  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:20.149585  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.149939  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.149964  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.150137  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150597  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150786  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:51:20.150893  290030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:51:20.150936  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.151011  290030 ssh_runner.go:195] Run: cat /version.json
	I0214 21:51:20.151035  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:51:20.153637  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.153678  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.153993  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.154014  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.154038  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:20.154055  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:20.154276  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.154357  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:51:20.154439  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.154495  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:51:20.154585  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.154643  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:51:20.154683  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:20.154998  290030 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:51:20.259516  290030 ssh_runner.go:195] Run: systemctl --version
	I0214 21:51:20.265435  290030 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:51:20.423713  290030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:51:20.430194  290030 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:51:20.430249  290030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:51:20.447320  290030 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
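	
	Before choosing a CNI, the run above stats the loopback config and then sidelines any bridge/podman CNI configs by renaming them to *.mk_disabled (the find/mv one-liner at 21:51:20.430249). A rough in-process equivalent of that rename pass, assuming the same /etc/cni/net.d layout; this is a sketch, not minikube's code path:
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	func main() {
		// Sideline bridge/podman CNI configs so they don't conflict with the CNI
		// that gets installed later, mirroring the find/mv step in the log.
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			fmt.Println("no CNI config dir:", err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join("/etc/cni/net.d", name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
					continue
				}
				fmt.Println("disabled", src)
			}
		}
	}
	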
	I0214 21:51:20.447350  290030 start.go:495] detecting cgroup driver to use...
	I0214 21:51:20.447403  290030 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:51:20.463539  290030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:51:20.477217  290030 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:51:20.477295  290030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:51:20.490458  290030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:51:20.506265  290030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:51:20.635362  290030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:51:20.791913  290030 docker.go:233] disabling docker service ...
	I0214 21:51:20.791970  290030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:51:20.808889  290030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:51:20.822017  290030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:51:20.944211  290030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:51:21.080254  290030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:51:21.095875  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:51:21.114491  290030 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0214 21:51:21.114553  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.125025  290030 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:51:21.125074  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.135881  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.146571  290030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:51:21.156909  290030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:51:21.167466  290030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:51:21.176847  290030 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 21:51:21.176902  290030 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 21:51:21.189597  290030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 21:51:21.198779  290030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:21.311520  290030 ssh_runner.go:195] Run: sudo systemctl restart crio
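	
	The steps from 21:51:21.114 to 21:51:21.311 point cri-o at the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager by sed-editing 02-crio.conf, then reload systemd and restart crio. A condensed sketch of the same shell sequence run locally rather than over SSH; the sed expressions are copied from the log and error handling is minimal:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func run(cmd string) error {
		fmt.Println("+", cmd)
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}
	
	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := []string{
			// Point cri-o at the pause image kubeadm v1.20 expects.
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, conf),
			// Use cgroupfs to match the kubelet's cgroupDriver setting.
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				fmt.Println("failed:", err)
				return
			}
		}
	}
	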
	I0214 21:51:21.406134  290030 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:51:21.406221  290030 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:51:21.410963  290030 start.go:563] Will wait 60s for crictl version
	I0214 21:51:21.411013  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:21.414903  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:51:21.454276  290030 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
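	
	The readiness probe above simply shells out to crictl and accepts the runtime once it reports a version block like the one printed. A tiny local sketch of that check, assuming sudo and crictl are available on PATH:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
		if err != nil {
			fmt.Println("crictl not ready yet:", err)
			return
		}
		// Output looks like the block in the log: RuntimeName, RuntimeVersion, RuntimeApiVersion.
		if strings.Contains(string(out), "RuntimeName:  cri-o") {
			fmt.Println("cri-o runtime detected")
		}
		fmt.Print(string(out))
	}
	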
	I0214 21:51:21.454366  290030 ssh_runner.go:195] Run: crio --version
	I0214 21:51:21.481275  290030 ssh_runner.go:195] Run: crio --version
	I0214 21:51:21.509347  290030 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0214 21:51:21.510550  290030 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:51:21.512928  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:21.513300  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:51:21.513329  290030 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:51:21.513550  290030 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0214 21:51:21.517441  290030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:51:21.529465  290030 kubeadm.go:875] updating cluster {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:51:21.529594  290030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:51:21.529640  290030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:21.560058  290030 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:51:21.560112  290030 ssh_runner.go:195] Run: which lz4
	I0214 21:51:21.563873  290030 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 21:51:21.567845  290030 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 21:51:21.567874  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0214 21:51:23.150723  290030 crio.go:462] duration metric: took 1.586877998s to copy over tarball
	I0214 21:51:23.150796  290030 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 21:51:25.603024  290030 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.452191708s)
	I0214 21:51:25.603059  290030 crio.go:469] duration metric: took 2.452307853s to extract the tarball
	I0214 21:51:25.603069  290030 ssh_runner.go:146] rm: /preloaded.tar.lz4
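	
	Since the runtime had no preloaded images, the preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var with lz4 (about 2.45s here). A local sketch of the existence check plus extraction; the tar flags mirror the command in the log, lz4 must be installed, and the scp fallback is only hinted at:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		tarball := "/preloaded.tar.lz4"
		// Same existence check the log performs with `stat`.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload tarball missing, would scp it over first:", err)
			return
		}
		// Extract with xattrs preserved, exactly as in the log's tar invocation.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("extraction failed:", err)
		}
	}
	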
	I0214 21:51:25.647042  290030 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:51:25.691327  290030 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:51:25.691354  290030 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 21:51:25.691442  290030 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:25.691458  290030 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.691475  290030 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0214 21:51:25.691483  290030 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.691465  290030 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.691449  290030 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.691546  290030 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.691570  290030 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.693361  290030 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.693464  290030 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.693499  290030 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:25.693511  290030 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.693366  290030 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.693788  290030 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.694029  290030 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.694062  290030 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 21:51:25.846856  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.857207  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.858123  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.868799  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:25.871190  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0214 21:51:25.883774  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0214 21:51:25.890757  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:25.937841  290030 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0214 21:51:25.937904  290030 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:25.937955  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:25.974145  290030 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0214 21:51:25.974181  290030 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:25.974191  290030 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0214 21:51:25.974225  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:25.974229  290030 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:25.974271  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.019744  290030 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0214 21:51:26.019798  290030 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.019862  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.025283  290030 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0214 21:51:26.025329  290030 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.025381  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.033932  290030 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0214 21:51:26.033957  290030 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0214 21:51:26.033973  290030 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.033978  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.033989  290030 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 21:51:26.034010  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.034028  290030 ssh_runner.go:195] Run: which crictl
	I0214 21:51:26.034066  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.034102  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.034116  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.034080  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.112992  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.160461  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.160467  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.163253  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.163321  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.163352  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.163369  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.215854  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:51:26.305663  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.325715  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:51:26.325765  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.325778  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:51:26.325715  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:51:26.325778  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:51:26.350427  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0214 21:51:26.489615  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:51:26.515867  290030 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:51:26.515959  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0214 21:51:26.515998  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0214 21:51:26.516063  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0214 21:51:26.516087  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0214 21:51:26.536809  290030 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:51:26.539975  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0214 21:51:26.574869  290030 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0214 21:51:26.696128  290030 cache_images.go:92] duration metric: took 1.004755714s to LoadCachedImages
	W0214 21:51:26.696227  290030 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0214 21:51:26.696245  290030 kubeadm.go:926] updating node { 192.168.72.19 8443 v1.20.0 crio true true} ...
	I0214 21:51:26.696373  290030 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-201745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:51:26.696466  290030 ssh_runner.go:195] Run: crio config
	I0214 21:51:26.746613  290030 cni.go:84] Creating CNI manager for ""
	I0214 21:51:26.746658  290030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:51:26.746670  290030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:51:26.746697  290030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-201745 NodeName:old-k8s-version-201745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 21:51:26.746885  290030 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-201745"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 21:51:26.746970  290030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0214 21:51:26.757127  290030 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:51:26.757199  290030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:51:26.766779  290030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0214 21:51:26.787809  290030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:51:26.805088  290030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
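	
	The kubeadm config printed above is rendered from a template and then scp'd to /var/tmp/minikube/kubeadm.yaml.new. A much-trimmed sketch of that rendering step with text/template, covering only the InitConfiguration section and reusing this run's values; the real template also emits ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Trimmed-down version of the InitConfiguration block shown above.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`
	
	func main() {
		data := struct {
			AdvertiseAddress, CRISocket, NodeName, NodeIP string
			BindPort                                      int
		}{
			AdvertiseAddress: "192.168.72.19",
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "old-k8s-version-201745",
			NodeIP:           "192.168.72.19",
			BindPort:         8443,
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		// Render to stdout; the real flow writes the rendered bytes to kubeadm.yaml.new.
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
	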
	I0214 21:51:26.824439  290030 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0214 21:51:26.828275  290030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:51:26.840675  290030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:51:26.964411  290030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:51:26.982471  290030 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745 for IP: 192.168.72.19
	I0214 21:51:26.982493  290030 certs.go:194] generating shared ca certs ...
	I0214 21:51:26.982513  290030 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:26.982702  290030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:51:26.982762  290030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:51:26.982776  290030 certs.go:256] generating profile certs ...
	I0214 21:51:26.982866  290030 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key
	I0214 21:51:26.982883  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt with IP's: []
	I0214 21:51:27.086210  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt ...
	I0214 21:51:27.086243  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.crt: {Name:mk78690042ad4da1a6a4edca3f1fc615ab233f5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.086454  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key ...
	I0214 21:51:27.086476  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key: {Name:mk9dcc9f8bf351125336639900feaa5a54463656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.086614  290030 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282
	I0214 21:51:27.086666  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.19]
	I0214 21:51:27.176437  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 ...
	I0214 21:51:27.176465  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282: {Name:mkbedbb12462578a35a6cf17b6a8d3bfc9a61c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.183135  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282 ...
	I0214 21:51:27.183163  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282: {Name:mk8e3fd4279cbf58b4cf8bc88b52058b57b99cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.183287  290030 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt.0d7fe282 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt
	I0214 21:51:27.183414  290030 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key
	I0214 21:51:27.183509  290030 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key
	I0214 21:51:27.183532  290030 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt with IP's: []
	I0214 21:51:27.332957  290030 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt ...
	I0214 21:51:27.332985  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt: {Name:mk0521fddd5fd5b15f245469d92dd539e5ce995e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.333186  290030 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key ...
	I0214 21:51:27.333205  290030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key: {Name:mkb179c5d1b5603349d7002e5cbe42b54cae6bf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:51:27.333437  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:51:27.333494  290030 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:51:27.333511  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:51:27.333540  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:51:27.333574  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:51:27.333607  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:51:27.333661  290030 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:51:27.334378  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:51:27.365750  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:51:27.391666  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:51:27.427686  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:51:27.456842  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0214 21:51:27.488943  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 21:51:27.516957  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:51:27.542594  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 21:51:27.573435  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:51:27.601362  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:51:27.626583  290030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:51:27.653893  290030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
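	
	The certs.go/crypto.go steps above mint a CA-signed apiserver certificate whose SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.19] and then scp everything into /var/lib/minikube/certs. A self-contained crypto/x509 sketch of that signing step; it generates a throwaway CA instead of loading the existing minikubeCA key, and error handling is omitted for brevity:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA for the sketch; the real flow reuses the profile's existing ca.key.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Apiserver cert with the IP SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.19"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Println("signed apiserver cert with IP SANs")
	}
	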
	I0214 21:51:27.670757  290030 ssh_runner.go:195] Run: openssl version
	I0214 21:51:27.676763  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:51:27.688401  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.692797  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.692854  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:51:27.698895  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 21:51:27.709384  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:51:27.719630  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.724214  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.724271  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:51:27.729857  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:51:27.741344  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:51:27.752386  290030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.757249  290030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.757294  290030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:51:27.763492  290030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
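	
	Trusting the CA inside the guest is done by copying each PEM into /usr/share/ca-certificates and linking it into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0 / b5213941.0 names above). A sketch of that hash-and-link step, shelling out to openssl and ln the same way the log does; run it as a user that can write /etc/ssl/certs:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// `openssl x509 -hash -noout` prints the subject hash used for the symlink name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			fmt.Println("hashing failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Same test-then-link guard the log runs over SSH.
		cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
		if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Println("linking failed:", err)
			return
		}
		fmt.Println("linked", pemPath, "->", link)
	}
	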
	I0214 21:51:27.774762  290030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:51:27.779055  290030 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 21:51:27.779111  290030 kubeadm.go:392] StartCluster: {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:51:27.779208  290030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:51:27.779257  290030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:51:27.837079  290030 cri.go:89] found id: ""
	I0214 21:51:27.837150  290030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:51:27.853732  290030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:51:27.868300  290030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:51:27.879448  290030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:51:27.879473  290030 kubeadm.go:157] found existing configuration files:
	
	I0214 21:51:27.879526  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:51:27.889031  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:51:27.889088  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:51:27.901374  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:51:27.912717  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:51:27.912778  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:51:27.924914  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:51:27.942213  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:51:27.942284  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:51:27.959684  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:51:27.969409  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:51:27.969462  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:51:27.979150  290030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 21:51:28.113879  290030 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 21:51:28.114157  290030 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:51:28.275275  290030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:51:28.275415  290030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:51:28.275590  290030 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 21:51:28.459073  290030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:51:28.571529  290030 out.go:235]   - Generating certificates and keys ...
	I0214 21:51:28.571668  290030 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:51:28.571801  290030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:51:28.571917  290030 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 21:51:28.781276  290030 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 21:51:28.889122  290030 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 21:51:29.037057  290030 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 21:51:29.163037  290030 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 21:51:29.163491  290030 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	I0214 21:51:29.328250  290030 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 21:51:29.328454  290030 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	I0214 21:51:29.525978  290030 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 21:51:29.691207  290030 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 21:51:29.820111  290030 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 21:51:29.820488  290030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:51:29.977726  290030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:51:30.129358  290030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:51:30.278856  290030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:51:30.408865  290030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:51:30.435199  290030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:51:30.436112  290030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:51:30.436182  290030 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:51:30.584138  290030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:51:30.672210  290030 out.go:235]   - Booting up control plane ...
	I0214 21:51:30.672342  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:51:30.672469  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:51:30.672596  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:51:30.672708  290030 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:51:30.672865  290030 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 21:52:10.609271  290030 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 21:52:10.610103  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:52:10.610376  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:52:15.610695  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:52:15.610994  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:52:25.610563  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:52:25.610936  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:52:45.609973  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:52:45.610256  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:53:25.611196  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:53:25.611464  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:53:25.611485  290030 kubeadm.go:310] 
	I0214 21:53:25.611526  290030 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 21:53:25.611608  290030 kubeadm.go:310] 		timed out waiting for the condition
	I0214 21:53:25.611626  290030 kubeadm.go:310] 
	I0214 21:53:25.611654  290030 kubeadm.go:310] 	This error is likely caused by:
	I0214 21:53:25.611683  290030 kubeadm.go:310] 		- The kubelet is not running
	I0214 21:53:25.611823  290030 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 21:53:25.611835  290030 kubeadm.go:310] 
	I0214 21:53:25.611973  290030 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 21:53:25.612027  290030 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 21:53:25.612072  290030 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 21:53:25.612097  290030 kubeadm.go:310] 
	I0214 21:53:25.612292  290030 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 21:53:25.612412  290030 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 21:53:25.612422  290030 kubeadm.go:310] 
	I0214 21:53:25.612535  290030 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 21:53:25.612653  290030 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 21:53:25.612760  290030 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 21:53:25.612856  290030 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 21:53:25.612867  290030 kubeadm.go:310] 
	I0214 21:53:25.613059  290030 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:53:25.613187  290030 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 21:53:25.613306  290030 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0214 21:53:25.613482  290030 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-201745] and IPs [192.168.72.19 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 21:53:25.613560  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 21:53:26.871600  290030 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.257992755s)
	I0214 21:53:26.871711  290030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:53:26.886378  290030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:53:26.896124  290030 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:53:26.896143  290030 kubeadm.go:157] found existing configuration files:
	
	I0214 21:53:26.896182  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:53:26.906123  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:53:26.906171  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:53:26.915623  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:53:26.924730  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:53:26.924786  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:53:26.934146  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:53:26.944196  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:53:26.944238  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:53:26.954196  290030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:53:26.964349  290030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:53:26.964399  290030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:53:26.975587  290030 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 21:53:27.048326  290030 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 21:53:27.048373  290030 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 21:53:27.206497  290030 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 21:53:27.206651  290030 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 21:53:27.206816  290030 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 21:53:27.383228  290030 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 21:53:27.385217  290030 out.go:235]   - Generating certificates and keys ...
	I0214 21:53:27.385322  290030 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 21:53:27.385384  290030 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 21:53:27.385458  290030 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 21:53:27.385510  290030 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 21:53:27.385570  290030 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 21:53:27.385617  290030 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 21:53:27.385671  290030 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 21:53:27.385733  290030 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 21:53:27.385843  290030 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 21:53:27.385981  290030 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 21:53:27.386036  290030 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 21:53:27.386108  290030 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 21:53:27.545840  290030 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 21:53:27.644111  290030 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 21:53:28.191186  290030 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 21:53:28.249994  290030 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 21:53:28.265248  290030 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 21:53:28.265972  290030 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 21:53:28.266046  290030 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 21:53:28.392734  290030 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 21:53:28.394451  290030 out.go:235]   - Booting up control plane ...
	I0214 21:53:28.394587  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 21:53:28.402752  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 21:53:28.403873  290030 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 21:53:28.404643  290030 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 21:53:28.408763  290030 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 21:54:08.411434  290030 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 21:54:08.411652  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:54:08.411899  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:54:13.412787  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:54:13.413031  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:54:23.413691  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:54:23.413900  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:54:43.413423  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:54:43.413689  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:55:23.412955  290030 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 21:55:23.413232  290030 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 21:55:23.413250  290030 kubeadm.go:310] 
	I0214 21:55:23.413325  290030 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 21:55:23.413407  290030 kubeadm.go:310] 		timed out waiting for the condition
	I0214 21:55:23.413428  290030 kubeadm.go:310] 
	I0214 21:55:23.413471  290030 kubeadm.go:310] 	This error is likely caused by:
	I0214 21:55:23.413537  290030 kubeadm.go:310] 		- The kubelet is not running
	I0214 21:55:23.413694  290030 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 21:55:23.413705  290030 kubeadm.go:310] 
	I0214 21:55:23.413858  290030 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 21:55:23.413909  290030 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 21:55:23.413960  290030 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 21:55:23.413970  290030 kubeadm.go:310] 
	I0214 21:55:23.414121  290030 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 21:55:23.414245  290030 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 21:55:23.414260  290030 kubeadm.go:310] 
	I0214 21:55:23.414427  290030 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 21:55:23.414545  290030 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 21:55:23.414659  290030 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 21:55:23.414746  290030 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 21:55:23.414753  290030 kubeadm.go:310] 
	I0214 21:55:23.415747  290030 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 21:55:23.415865  290030 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 21:55:23.415941  290030 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 21:55:23.416022  290030 kubeadm.go:394] duration metric: took 3m55.636912816s to StartCluster
	I0214 21:55:23.416076  290030 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:55:23.416139  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:55:23.472700  290030 cri.go:89] found id: ""
	I0214 21:55:23.472725  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.472736  290030 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:55:23.472745  290030 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:55:23.472803  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:55:23.512255  290030 cri.go:89] found id: ""
	I0214 21:55:23.512280  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.512291  290030 logs.go:284] No container was found matching "etcd"
	I0214 21:55:23.512301  290030 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:55:23.512358  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:55:23.548981  290030 cri.go:89] found id: ""
	I0214 21:55:23.549000  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.549008  290030 logs.go:284] No container was found matching "coredns"
	I0214 21:55:23.549014  290030 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:55:23.549051  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:55:23.611827  290030 cri.go:89] found id: ""
	I0214 21:55:23.611868  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.611878  290030 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:55:23.611889  290030 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:55:23.611953  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:55:23.647207  290030 cri.go:89] found id: ""
	I0214 21:55:23.647234  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.647241  290030 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:55:23.647248  290030 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:55:23.647298  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:55:23.683110  290030 cri.go:89] found id: ""
	I0214 21:55:23.683134  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.683154  290030 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:55:23.683163  290030 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:55:23.683222  290030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:55:23.727591  290030 cri.go:89] found id: ""
	I0214 21:55:23.727625  290030 logs.go:282] 0 containers: []
	W0214 21:55:23.727638  290030 logs.go:284] No container was found matching "kindnet"
	I0214 21:55:23.727652  290030 logs.go:123] Gathering logs for container status ...
	I0214 21:55:23.727669  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:55:23.783249  290030 logs.go:123] Gathering logs for kubelet ...
	I0214 21:55:23.783278  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:55:23.847415  290030 logs.go:123] Gathering logs for dmesg ...
	I0214 21:55:23.847443  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:55:23.862277  290030 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:55:23.862315  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:55:24.008635  290030 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:55:24.008659  290030 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:55:24.008674  290030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0214 21:55:24.122731  290030 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 21:55:24.122793  290030 out.go:270] * 
	* 
	W0214 21:55:24.122854  290030 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 21:55:24.122868  290030 out.go:270] * 
	* 
	W0214 21:55:24.123785  290030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 21:55:24.126783  290030 out.go:201] 
	W0214 21:55:24.127954  290030 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 21:55:24.128011  290030 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 21:55:24.128038  290030 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 21:55:24.129339  290030 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 6 (267.034686ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 21:55:24.446934  293631 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-201745" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-201745" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (333.95s)
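Every later status probe in this group fails the same way: the profile entry was never written to /home/jenkins/minikube-integration/20315-243456/kubeconfig, so kubectl still points at a stale context. A small check-and-repair sketch along the lines of the warning above (illustrative; only useful once the cluster actually comes up):

	export KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	kubectl config get-contexts                                  # old-k8s-version-201745 is missing after the failed first start
	out/minikube-linux-amd64 -p old-k8s-version-201745 update-context
	kubectl config current-context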

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-201745 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-201745 create -f testdata/busybox.yaml: exit status 1 (46.33595ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-201745" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-201745 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 6 (259.19456ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 21:55:24.750674  293672 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-201745" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-201745" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 6 (239.780492ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 21:55:24.990481  293701 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-201745" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-201745" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (117.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-201745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-201745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.903859011s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-201745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-201745 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-201745 describe deploy/metrics-server -n kube-system: exit status 1 (53.03827ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-201745" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-201745 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 6 (244.885895ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0214 21:57:22.193603  295920 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-201745" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-201745" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (117.20s)
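The enable fails because the addon's kubectl apply callbacks run inside the VM against the apiserver on localhost:8443, which refuses connections since the control plane never came up. A minimal sanity check before retrying the enable (illustrative; 192.168.72.19 is the node IP this profile uses):

	out/minikube-linux-amd64 status -p old-k8s-version-201745
	curl -k https://192.168.72.19:8443/healthz                   # expect "ok" once kube-apiserver is serving
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-201745 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain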

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (523.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m42.085014361s)

                                                
                                                
-- stdout --
	* [old-k8s-version-201745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-201745" primary control-plane node in "old-k8s-version-201745" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-201745" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:57:26.806544  296043 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:57:26.806695  296043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:57:26.806706  296043 out.go:358] Setting ErrFile to fd 2...
	I0214 21:57:26.806710  296043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:57:26.806894  296043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:57:26.807516  296043 out.go:352] Setting JSON to false
	I0214 21:57:26.808685  296043 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9591,"bootTime":1739560656,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:57:26.808786  296043 start.go:140] virtualization: kvm guest
	I0214 21:57:26.810766  296043 out.go:177] * [old-k8s-version-201745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:57:26.811997  296043 notify.go:220] Checking for updates...
	I0214 21:57:26.812032  296043 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:57:26.813377  296043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:57:26.814649  296043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:57:26.815864  296043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:57:26.816947  296043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:57:26.818121  296043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:57:26.819650  296043 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:57:26.820064  296043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:57:26.820127  296043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:57:26.836201  296043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0214 21:57:26.836906  296043 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:57:26.837660  296043 main.go:141] libmachine: Using API Version  1
	I0214 21:57:26.837688  296043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:57:26.838098  296043 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:57:26.838337  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:26.839887  296043 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0214 21:57:26.840943  296043 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:57:26.841372  296043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:57:26.841445  296043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:57:26.864355  296043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0214 21:57:26.864874  296043 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:57:26.865408  296043 main.go:141] libmachine: Using API Version  1
	I0214 21:57:26.865434  296043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:57:26.865786  296043 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:57:26.865989  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:26.903617  296043 out.go:177] * Using the kvm2 driver based on existing profile
	I0214 21:57:26.904781  296043 start.go:304] selected driver: kvm2
	I0214 21:57:26.904802  296043 start.go:908] validating driver "kvm2" against &{Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:57:26.904962  296043 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:57:26.905995  296043 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:57:26.906115  296043 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 21:57:26.921829  296043 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 21:57:26.922207  296043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 21:57:26.922238  296043 cni.go:84] Creating CNI manager for ""
	I0214 21:57:26.922292  296043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:57:26.922349  296043 start.go:347] cluster config:
	{Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:57:26.922499  296043 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 21:57:26.924143  296043 out.go:177] * Starting "old-k8s-version-201745" primary control-plane node in "old-k8s-version-201745" cluster
	I0214 21:57:26.925292  296043 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:57:26.925335  296043 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0214 21:57:26.925347  296043 cache.go:56] Caching tarball of preloaded images
	I0214 21:57:26.925429  296043 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 21:57:26.925441  296043 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0214 21:57:26.925542  296043 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json ...
	I0214 21:57:26.925736  296043 start.go:360] acquireMachinesLock for old-k8s-version-201745: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 21:57:36.063087  296043 start.go:364] duration metric: took 9.13730903s to acquireMachinesLock for "old-k8s-version-201745"
	I0214 21:57:36.063134  296043 start.go:96] Skipping create...Using existing machine configuration
	I0214 21:57:36.063142  296043 fix.go:54] fixHost starting: 
	I0214 21:57:36.063563  296043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:57:36.063617  296043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:57:36.081279  296043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0214 21:57:36.081658  296043 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:57:36.082141  296043 main.go:141] libmachine: Using API Version  1
	I0214 21:57:36.082168  296043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:57:36.082490  296043 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:57:36.082708  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:36.082868  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetState
	I0214 21:57:36.084339  296043 fix.go:112] recreateIfNeeded on old-k8s-version-201745: state=Stopped err=<nil>
	I0214 21:57:36.084364  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	W0214 21:57:36.084518  296043 fix.go:138] unexpected machine state, will restart: <nil>
	I0214 21:57:36.086369  296043 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-201745" ...
	I0214 21:57:36.087671  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .Start
	I0214 21:57:36.087897  296043 main.go:141] libmachine: (old-k8s-version-201745) starting domain...
	I0214 21:57:36.087921  296043 main.go:141] libmachine: (old-k8s-version-201745) ensuring networks are active...
	I0214 21:57:36.088543  296043 main.go:141] libmachine: (old-k8s-version-201745) Ensuring network default is active
	I0214 21:57:36.088900  296043 main.go:141] libmachine: (old-k8s-version-201745) Ensuring network mk-old-k8s-version-201745 is active
	I0214 21:57:36.089403  296043 main.go:141] libmachine: (old-k8s-version-201745) getting domain XML...
	I0214 21:57:36.090156  296043 main.go:141] libmachine: (old-k8s-version-201745) creating domain...
	I0214 21:57:36.452007  296043 main.go:141] libmachine: (old-k8s-version-201745) waiting for IP...
	I0214 21:57:36.453018  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:36.453482  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:36.453547  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:36.453451  296162 retry.go:31] will retry after 226.387833ms: waiting for domain to come up
	I0214 21:57:36.682251  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:36.682990  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:36.683023  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:36.682944  296162 retry.go:31] will retry after 389.877154ms: waiting for domain to come up
	I0214 21:57:37.074504  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:37.074951  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:37.074978  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:37.074927  296162 retry.go:31] will retry after 349.170616ms: waiting for domain to come up
	I0214 21:57:37.425545  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:37.426084  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:37.426121  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:37.426035  296162 retry.go:31] will retry after 541.740659ms: waiting for domain to come up
	I0214 21:57:37.969731  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:37.970316  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:37.970355  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:37.970289  296162 retry.go:31] will retry after 577.574939ms: waiting for domain to come up
	I0214 21:57:38.549128  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:38.549778  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:38.549807  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:38.549759  296162 retry.go:31] will retry after 919.327951ms: waiting for domain to come up
	I0214 21:57:39.470707  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:39.471256  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:39.471285  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:39.471222  296162 retry.go:31] will retry after 931.781102ms: waiting for domain to come up
	I0214 21:57:40.405115  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:40.405681  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:40.405713  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:40.405636  296162 retry.go:31] will retry after 1.033198897s: waiting for domain to come up
	I0214 21:57:41.440264  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:41.440851  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:41.440884  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:41.440799  296162 retry.go:31] will retry after 1.654124613s: waiting for domain to come up
	I0214 21:57:43.097766  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:43.098291  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:43.098323  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:43.098247  296162 retry.go:31] will retry after 1.952207072s: waiting for domain to come up
	I0214 21:57:45.052286  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:45.052825  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:45.052854  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:45.052753  296162 retry.go:31] will retry after 2.127261985s: waiting for domain to come up
	I0214 21:57:47.181599  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:47.182039  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:47.182105  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:47.182052  296162 retry.go:31] will retry after 3.038066328s: waiting for domain to come up
	I0214 21:57:50.223911  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:50.224493  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:50.224524  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:50.224455  296162 retry.go:31] will retry after 3.843042282s: waiting for domain to come up
	I0214 21:57:54.068557  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:54.069032  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | unable to find current IP address of domain old-k8s-version-201745 in network mk-old-k8s-version-201745
	I0214 21:57:54.069060  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | I0214 21:57:54.069007  296162 retry.go:31] will retry after 3.608028732s: waiting for domain to come up
	I0214 21:57:57.678901  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.679493  296043 main.go:141] libmachine: (old-k8s-version-201745) found domain IP: 192.168.72.19
	I0214 21:57:57.679517  296043 main.go:141] libmachine: (old-k8s-version-201745) reserving static IP address...
	I0214 21:57:57.679543  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has current primary IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.679951  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "old-k8s-version-201745", mac: "52:54:00:6d:30:ba", ip: "192.168.72.19"} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:57.679987  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | skip adding static IP to network mk-old-k8s-version-201745 - found existing host DHCP lease matching {name: "old-k8s-version-201745", mac: "52:54:00:6d:30:ba", ip: "192.168.72.19"}
	I0214 21:57:57.680006  296043 main.go:141] libmachine: (old-k8s-version-201745) reserved static IP address 192.168.72.19 for domain old-k8s-version-201745
	I0214 21:57:57.680026  296043 main.go:141] libmachine: (old-k8s-version-201745) waiting for SSH...
	I0214 21:57:57.680042  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | Getting to WaitForSSH function...
	I0214 21:57:57.682675  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.683052  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:57.683081  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.683216  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH client type: external
	I0214 21:57:57.683242  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa (-rw-------)
	I0214 21:57:57.683287  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 21:57:57.683306  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | About to run SSH command:
	I0214 21:57:57.683317  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | exit 0
	I0214 21:57:57.815679  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | SSH cmd err, output: <nil>: 
	I0214 21:57:57.816012  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetConfigRaw
	I0214 21:57:57.816570  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:57:57.818856  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.991744  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:57.991777  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:57.992146  296043 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/config.json ...
	I0214 21:57:57.992404  296043 machine.go:93] provisionDockerMachine start ...
	I0214 21:57:57.992451  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:57.992737  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:58.409333  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.409655  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:58.409689  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.409828  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:58.410048  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.410232  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.410361  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:58.410513  296043 main.go:141] libmachine: Using SSH client type: native
	I0214 21:57:58.410764  296043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:57:58.410779  296043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0214 21:57:58.514439  296043 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0214 21:57:58.514465  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:57:58.514697  296043 buildroot.go:166] provisioning hostname "old-k8s-version-201745"
	I0214 21:57:58.514720  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:57:58.514864  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:58.517484  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.517929  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:58.517954  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.518138  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:58.518340  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.518490  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.518695  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:58.518890  296043 main.go:141] libmachine: Using SSH client type: native
	I0214 21:57:58.519051  296043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:57:58.519063  296043 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-201745 && echo "old-k8s-version-201745" | sudo tee /etc/hostname
	I0214 21:57:58.646070  296043 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-201745
	
	I0214 21:57:58.646105  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:58.648991  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.649438  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:58.649469  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.649636  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:58.649842  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.650008  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:58.650176  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:58.650378  296043 main.go:141] libmachine: Using SSH client type: native
	I0214 21:57:58.650653  296043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:57:58.650683  296043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-201745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-201745/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-201745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 21:57:58.769280  296043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 21:57:58.769305  296043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 21:57:58.769326  296043 buildroot.go:174] setting up certificates
	I0214 21:57:58.769356  296043 provision.go:84] configureAuth start
	I0214 21:57:58.769373  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetMachineName
	I0214 21:57:58.769689  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:57:58.772317  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.772763  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:58.772798  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.772993  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:58.775864  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.776218  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:58.776244  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:58.776417  296043 provision.go:143] copyHostCerts
	I0214 21:57:58.776474  296043 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 21:57:58.776492  296043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 21:57:58.776570  296043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 21:57:58.776690  296043 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 21:57:58.776702  296043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 21:57:58.776733  296043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 21:57:58.776819  296043 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 21:57:58.776829  296043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 21:57:58.776863  296043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 21:57:58.776948  296043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-201745 san=[127.0.0.1 192.168.72.19 localhost minikube old-k8s-version-201745]
	I0214 21:57:59.131273  296043 provision.go:177] copyRemoteCerts
	I0214 21:57:59.131346  296043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 21:57:59.131380  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.133940  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.134364  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.134410  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.134532  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.134773  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.134912  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.135083  296043 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:57:59.221011  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 21:57:59.244998  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0214 21:57:59.268477  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 21:57:59.291696  296043 provision.go:87] duration metric: took 522.322065ms to configureAuth
	I0214 21:57:59.291722  296043 buildroot.go:189] setting minikube options for container-runtime
	I0214 21:57:59.291888  296043 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:57:59.291977  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.294492  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.294854  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.294884  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.294986  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.295178  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.295364  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.295540  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.295731  296043 main.go:141] libmachine: Using SSH client type: native
	I0214 21:57:59.295943  296043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:57:59.295966  296043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 21:57:59.541265  296043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 21:57:59.541292  296043 machine.go:96] duration metric: took 1.548871659s to provisionDockerMachine
	I0214 21:57:59.541305  296043 start.go:293] postStartSetup for "old-k8s-version-201745" (driver="kvm2")
	I0214 21:57:59.541314  296043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 21:57:59.541354  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:59.541727  296043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 21:57:59.541768  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.544915  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.545283  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.545304  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.545474  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.545666  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.545802  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.545947  296043 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:57:59.625058  296043 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 21:57:59.629200  296043 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 21:57:59.629222  296043 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 21:57:59.629277  296043 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 21:57:59.629370  296043 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 21:57:59.629472  296043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 21:57:59.639391  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:57:59.663776  296043 start.go:296] duration metric: took 122.458277ms for postStartSetup
	I0214 21:57:59.663815  296043 fix.go:56] duration metric: took 23.60067309s for fixHost
	I0214 21:57:59.663839  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.666497  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.666852  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.666883  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.667062  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.667262  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.667410  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.667493  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.667592  296043 main.go:141] libmachine: Using SSH client type: native
	I0214 21:57:59.667767  296043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0214 21:57:59.667782  296043 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 21:57:59.775475  296043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570279.750474361
	
	I0214 21:57:59.775496  296043 fix.go:216] guest clock: 1739570279.750474361
	I0214 21:57:59.775506  296043 fix.go:229] Guest: 2025-02-14 21:57:59.750474361 +0000 UTC Remote: 2025-02-14 21:57:59.663820407 +0000 UTC m=+32.896845374 (delta=86.653954ms)
	I0214 21:57:59.775542  296043 fix.go:200] guest clock delta is within tolerance: 86.653954ms
	I0214 21:57:59.775553  296043 start.go:83] releasing machines lock for "old-k8s-version-201745", held for 23.712436895s
	I0214 21:57:59.775583  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:59.775774  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:57:59.778452  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.778857  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.778879  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.779029  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:59.779477  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:59.779662  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .DriverName
	I0214 21:57:59.779733  296043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 21:57:59.779776  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.779879  296043 ssh_runner.go:195] Run: cat /version.json
	I0214 21:57:59.779906  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHHostname
	I0214 21:57:59.782414  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.782652  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.782821  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.782875  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.782999  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.783136  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:57:59.783172  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:57:59.783174  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.783363  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHPort
	I0214 21:57:59.783367  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.783538  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHKeyPath
	I0214 21:57:59.783633  296043 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:57:59.783987  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetSSHUsername
	I0214 21:57:59.784122  296043 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/old-k8s-version-201745/id_rsa Username:docker}
	I0214 21:57:59.875958  296043 ssh_runner.go:195] Run: systemctl --version
	I0214 21:57:59.898875  296043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 21:58:00.055042  296043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 21:58:00.064191  296043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 21:58:00.064263  296043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 21:58:00.086697  296043 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 21:58:00.086725  296043 start.go:495] detecting cgroup driver to use...
	I0214 21:58:00.086801  296043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 21:58:00.107023  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 21:58:00.127856  296043 docker.go:217] disabling cri-docker service (if available) ...
	I0214 21:58:00.127917  296043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 21:58:00.151675  296043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 21:58:00.175578  296043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 21:58:00.318749  296043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 21:58:00.479854  296043 docker.go:233] disabling docker service ...
	I0214 21:58:00.479907  296043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 21:58:00.495930  296043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 21:58:00.511332  296043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 21:58:00.676191  296043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 21:58:00.808405  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 21:58:00.824779  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 21:58:00.846756  296043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0214 21:58:00.846826  296043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:58:00.860665  296043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 21:58:00.860724  296043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:58:00.874233  296043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:58:00.887837  296043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 21:58:00.898053  296043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 21:58:00.913174  296043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 21:58:00.927103  296043 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 21:58:00.927161  296043 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 21:58:00.943685  296043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
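Note: the three commands above are the usual bridge-netfilter prerequisite for a CNI-based runtime: the sysctl read fails with status 255 until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is enabled before cri-o is restarted. A minimal sketch of the same check run by hand on a node (illustrative only, not taken from this log):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables    # readable (0 or 1) once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # same effect as the logged "echo 1 > ..." command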
	I0214 21:58:00.956183  296043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:58:01.097863  296043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 21:58:01.199737  296043 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 21:58:01.199822  296043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 21:58:01.205331  296043 start.go:563] Will wait 60s for crictl version
	I0214 21:58:01.205393  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:01.209711  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 21:58:01.258117  296043 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 21:58:01.258187  296043 ssh_runner.go:195] Run: crio --version
	I0214 21:58:01.291368  296043 ssh_runner.go:195] Run: crio --version
	I0214 21:58:01.323106  296043 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0214 21:58:01.324295  296043 main.go:141] libmachine: (old-k8s-version-201745) Calling .GetIP
	I0214 21:58:01.327552  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:58:01.327981  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:30:ba", ip: ""} in network mk-old-k8s-version-201745: {Iface:virbr4 ExpiryTime:2025-02-14 22:51:10 +0000 UTC Type:0 Mac:52:54:00:6d:30:ba Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:old-k8s-version-201745 Clientid:01:52:54:00:6d:30:ba}
	I0214 21:58:01.328015  296043 main.go:141] libmachine: (old-k8s-version-201745) DBG | domain old-k8s-version-201745 has defined IP address 192.168.72.19 and MAC address 52:54:00:6d:30:ba in network mk-old-k8s-version-201745
	I0214 21:58:01.328176  296043 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0214 21:58:01.332809  296043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:58:01.349099  296043 kubeadm.go:875] updating cluster {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 21:58:01.349283  296043 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 21:58:01.349363  296043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:58:01.406956  296043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:58:01.407037  296043 ssh_runner.go:195] Run: which lz4
	I0214 21:58:01.411786  296043 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 21:58:01.416820  296043 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 21:58:01.416854  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0214 21:58:03.128366  296043 crio.go:462] duration metric: took 1.716615695s to copy over tarball
	I0214 21:58:03.128454  296043 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 21:58:06.509917  296043 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.381413351s)
	I0214 21:58:06.509950  296043 crio.go:469] duration metric: took 3.381548105s to extract the tarball
	I0214 21:58:06.509961  296043 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 21:58:06.561435  296043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 21:58:06.604081  296043 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0214 21:58:06.604110  296043 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 21:58:06.604161  296043 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:06.604209  296043 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:06.604197  296043 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0214 21:58:06.604238  296043 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0214 21:58:06.604270  296043 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:06.604481  296043 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:06.604517  296043 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:06.604161  296043 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:58:06.605828  296043 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:06.605855  296043 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:06.605828  296043 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0214 21:58:06.605838  296043 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:06.606238  296043 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:06.606254  296043 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:06.606332  296043 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:58:06.606446  296043 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 21:58:06.770455  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:06.770884  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:06.780719  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:06.784604  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:06.800805  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0214 21:58:06.811529  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:06.813555  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0214 21:58:06.965437  296043 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0214 21:58:06.965467  296043 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0214 21:58:06.965492  296043 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:06.965507  296043 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:06.965553  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:06.965555  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:06.994907  296043 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0214 21:58:06.994948  296043 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:06.995006  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:07.010962  296043 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0214 21:58:07.011006  296043 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:07.011046  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:07.037185  296043 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0214 21:58:07.037228  296043 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:07.037275  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:07.037275  296043 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0214 21:58:07.037293  296043 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0214 21:58:07.037336  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:07.040244  296043 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0214 21:58:07.040286  296043 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0214 21:58:07.040291  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:07.040330  296043 ssh_runner.go:195] Run: which crictl
	I0214 21:58:07.040394  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:07.040446  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:07.040475  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:07.045981  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:58:07.046075  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:07.177977  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:07.178058  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:07.178063  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:58:07.178132  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:07.178227  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:07.184207  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:58:07.196014  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:07.351916  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0214 21:58:07.351926  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0214 21:58:07.352002  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:58:07.374855  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0214 21:58:07.374891  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0214 21:58:07.374986  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0214 21:58:07.378477  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0214 21:58:07.487517  296043 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 21:58:07.540514  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0214 21:58:07.540590  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0214 21:58:07.540670  296043 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0214 21:58:07.540805  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0214 21:58:07.573623  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0214 21:58:07.573692  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0214 21:58:07.573721  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0214 21:58:07.692797  296043 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0214 21:58:07.692862  296043 cache_images.go:92] duration metric: took 1.088733479s to LoadCachedImages
	W0214 21:58:07.692949  296043 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20315-243456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0214 21:58:07.692969  296043 kubeadm.go:926] updating node { 192.168.72.19 8443 v1.20.0 crio true true} ...
	I0214 21:58:07.693097  296043 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-201745 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0214 21:58:07.693182  296043 ssh_runner.go:195] Run: crio config
	I0214 21:58:07.747013  296043 cni.go:84] Creating CNI manager for ""
	I0214 21:58:07.747045  296043 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 21:58:07.747058  296043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 21:58:07.747085  296043 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-201745 NodeName:old-k8s-version-201745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 21:58:07.747284  296043 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-201745"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
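Note: the kubeadm/kubelet/kube-proxy configuration above is written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml on the guest. To exercise that file outside the harness, a sketch along these lines should work, assuming kubeadm v1.20 accepts --dry-run together with --config (it renders the control-plane manifests without modifying the node):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run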
	I0214 21:58:07.747361  296043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0214 21:58:07.757572  296043 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 21:58:07.757635  296043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 21:58:07.768572  296043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0214 21:58:07.788798  296043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 21:58:07.806715  296043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0214 21:58:07.826424  296043 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0214 21:58:07.830379  296043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 21:58:07.843188  296043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 21:58:07.964619  296043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 21:58:07.983629  296043 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745 for IP: 192.168.72.19
	I0214 21:58:07.983656  296043 certs.go:194] generating shared ca certs ...
	I0214 21:58:07.983677  296043 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:58:07.983897  296043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 21:58:07.983962  296043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 21:58:07.983977  296043 certs.go:256] generating profile certs ...
	I0214 21:58:07.984111  296043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/client.key
	I0214 21:58:08.065809  296043 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key.0d7fe282
	I0214 21:58:08.065915  296043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key
	I0214 21:58:08.066099  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 21:58:08.066153  296043 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 21:58:08.066167  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 21:58:08.066188  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 21:58:08.066211  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 21:58:08.066301  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 21:58:08.066371  296043 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 21:58:08.067164  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 21:58:08.105981  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 21:58:08.148375  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 21:58:08.182185  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 21:58:08.220282  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0214 21:58:08.246878  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 21:58:08.272004  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 21:58:08.299011  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/old-k8s-version-201745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 21:58:08.327624  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 21:58:08.355979  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 21:58:08.389396  296043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 21:58:08.420580  296043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 21:58:08.444038  296043 ssh_runner.go:195] Run: openssl version
	I0214 21:58:08.450496  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 21:58:08.461221  296043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:58:08.466765  296043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:58:08.466802  296043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 21:58:08.472984  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 21:58:08.483635  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 21:58:08.494502  296043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 21:58:08.499056  296043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 21:58:08.499103  296043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 21:58:08.504661  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 21:58:08.514827  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 21:58:08.525057  296043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 21:58:08.529499  296043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 21:58:08.529544  296043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 21:58:08.535537  296043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 21:58:08.546472  296043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 21:58:08.551296  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0214 21:58:08.558566  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0214 21:58:08.566302  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0214 21:58:08.572995  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0214 21:58:08.579129  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0214 21:58:08.585605  296043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
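Note: each openssl invocation above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so minikube can confirm that the existing control-plane certificates are still usable. Equivalent manual check (illustrative, using one of the paths from the log):

	# exit status 0 means the certificate is still valid for at least another 24h
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "certificate ok" || echo "certificate expires within 24h"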
	I0214 21:58:08.592479  296043 kubeadm.go:392] StartCluster: {Name:old-k8s-version-201745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-201745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 21:58:08.592569  296043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 21:58:08.592632  296043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:58:08.641248  296043 cri.go:89] found id: ""
	I0214 21:58:08.641313  296043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 21:58:08.653942  296043 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0214 21:58:08.653961  296043 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0214 21:58:08.654013  296043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0214 21:58:08.664976  296043 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0214 21:58:08.665896  296043 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-201745" does not appear in /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:58:08.666435  296043 kubeconfig.go:62] /home/jenkins/minikube-integration/20315-243456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-201745" cluster setting kubeconfig missing "old-k8s-version-201745" context setting]
	I0214 21:58:08.667140  296043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 21:58:08.668904  296043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0214 21:58:08.678548  296043 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.72.19
	I0214 21:58:08.678577  296043 kubeadm.go:1152] stopping kube-system containers ...
	I0214 21:58:08.678592  296043 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0214 21:58:08.678674  296043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 21:58:08.724378  296043 cri.go:89] found id: ""
	I0214 21:58:08.724438  296043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0214 21:58:08.743221  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 21:58:08.752881  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 21:58:08.752900  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 21:58:08.752957  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 21:58:08.761830  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 21:58:08.761881  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 21:58:08.770974  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 21:58:08.779713  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 21:58:08.779758  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 21:58:08.788903  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 21:58:08.799497  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 21:58:08.799552  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 21:58:08.813085  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 21:58:08.826410  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 21:58:08.826473  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 21:58:08.840030  296043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 21:58:08.854003  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:58:09.019508  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:58:10.264911  296043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.245370249s)
	I0214 21:58:10.264944  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:58:10.515157  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:58:10.625133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0214 21:58:10.731682  296043 api_server.go:52] waiting for apiserver process to appear ...
	I0214 21:58:10.731768  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:11.231904  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:11.732111  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:12.232556  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:12.731863  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:13.232081  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:13.732595  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:14.232353  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:14.732154  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:15.232693  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:15.732476  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:16.231871  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:16.732052  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:17.232042  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:17.732730  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:18.232553  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:18.732752  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:19.232351  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:19.731920  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:20.232174  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:20.732726  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:21.232297  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:21.732492  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:22.232171  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:22.731950  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:23.232852  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:23.732685  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:24.231963  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:24.731815  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:25.232724  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:25.732012  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:26.231910  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:26.731974  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:27.232393  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:27.732670  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:28.232598  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:28.732788  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:29.232456  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:29.732656  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:30.232502  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:30.732119  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:31.232528  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:31.732811  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:32.232497  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:32.732826  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:33.232196  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:33.732779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:34.231931  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:34.731935  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:35.232162  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:35.732318  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:36.232621  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:36.732531  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:37.232704  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:37.732614  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:38.232348  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:38.732836  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:39.232349  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:39.732148  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:40.232783  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:40.732811  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:41.231836  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:41.732164  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:42.232199  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:42.732111  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:43.232818  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:43.732505  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:44.232223  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:44.732604  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:45.231891  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:45.732444  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:46.232198  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:46.731962  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:47.232075  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:47.732467  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:48.231942  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:48.732251  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:49.232279  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:49.732398  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:50.232161  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:50.732732  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:51.232114  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:51.732037  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:52.232697  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:52.732818  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:53.232526  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:53.732727  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:54.231922  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:54.732553  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:55.232836  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:55.732768  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:56.232476  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:56.732018  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:57.231874  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:57.732592  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:58.232711  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:58.731857  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:59.231983  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:58:59.732787  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:00.232449  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:00.732789  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:01.232842  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:01.732504  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:02.231888  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:02.732413  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:03.232525  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:03.732576  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:04.231918  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:04.732814  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:05.232700  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:05.732547  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:06.231883  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:06.732641  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:07.231916  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:07.732646  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:08.232529  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:08.732885  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:09.232300  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:09.732626  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:10.232318  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:10.732185  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:10.732274  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:10.773196  296043 cri.go:89] found id: ""
	I0214 21:59:10.773221  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.773230  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:10.773236  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:10.773291  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:10.809462  296043 cri.go:89] found id: ""
	I0214 21:59:10.809485  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.809493  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:10.809499  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:10.809548  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:10.843476  296043 cri.go:89] found id: ""
	I0214 21:59:10.843496  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.843504  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:10.843509  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:10.843560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:10.878036  296043 cri.go:89] found id: ""
	I0214 21:59:10.878064  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.878075  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:10.878081  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:10.878153  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:10.910760  296043 cri.go:89] found id: ""
	I0214 21:59:10.910788  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.910799  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:10.910806  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:10.910868  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:10.943448  296043 cri.go:89] found id: ""
	I0214 21:59:10.943483  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.943495  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:10.943503  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:10.943578  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:10.984273  296043 cri.go:89] found id: ""
	I0214 21:59:10.984299  296043 logs.go:282] 0 containers: []
	W0214 21:59:10.984308  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:10.984313  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:10.984358  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:11.016445  296043 cri.go:89] found id: ""
	I0214 21:59:11.016465  296043 logs.go:282] 0 containers: []
	W0214 21:59:11.016473  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:11.016483  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:11.016494  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:11.143826  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:11.143848  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:11.143863  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:11.215974  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:11.216008  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:11.260154  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:11.260196  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:11.312671  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:11.312705  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:13.826776  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:13.844385  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:13.844464  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:13.899170  296043 cri.go:89] found id: ""
	I0214 21:59:13.899200  296043 logs.go:282] 0 containers: []
	W0214 21:59:13.899212  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:13.899245  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:13.899309  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:13.948225  296043 cri.go:89] found id: ""
	I0214 21:59:13.948258  296043 logs.go:282] 0 containers: []
	W0214 21:59:13.948270  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:13.948278  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:13.948349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:13.991747  296043 cri.go:89] found id: ""
	I0214 21:59:13.991780  296043 logs.go:282] 0 containers: []
	W0214 21:59:13.991791  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:13.991799  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:13.991867  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:14.039369  296043 cri.go:89] found id: ""
	I0214 21:59:14.039398  296043 logs.go:282] 0 containers: []
	W0214 21:59:14.039411  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:14.039419  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:14.039489  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:14.088485  296043 cri.go:89] found id: ""
	I0214 21:59:14.088509  296043 logs.go:282] 0 containers: []
	W0214 21:59:14.088517  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:14.088523  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:14.088582  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:14.147779  296043 cri.go:89] found id: ""
	I0214 21:59:14.147809  296043 logs.go:282] 0 containers: []
	W0214 21:59:14.147821  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:14.147829  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:14.147892  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:14.186266  296043 cri.go:89] found id: ""
	I0214 21:59:14.186299  296043 logs.go:282] 0 containers: []
	W0214 21:59:14.186308  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:14.186314  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:14.186371  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:14.225125  296043 cri.go:89] found id: ""
	I0214 21:59:14.225155  296043 logs.go:282] 0 containers: []
	W0214 21:59:14.225167  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:14.225178  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:14.225190  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:14.275144  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:14.275177  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:14.325999  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:14.326036  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:14.340953  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:14.340988  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:14.410541  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:14.410569  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:14.410588  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:17.013888  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:17.028177  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:17.028255  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:17.072494  296043 cri.go:89] found id: ""
	I0214 21:59:17.072526  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.072540  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:17.072548  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:17.072610  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:17.121387  296043 cri.go:89] found id: ""
	I0214 21:59:17.121406  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.121414  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:17.121421  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:17.121478  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:17.156416  296043 cri.go:89] found id: ""
	I0214 21:59:17.156445  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.156457  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:17.156469  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:17.156535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:17.191398  296043 cri.go:89] found id: ""
	I0214 21:59:17.191428  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.191439  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:17.191445  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:17.191509  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:17.228153  296043 cri.go:89] found id: ""
	I0214 21:59:17.228181  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.228192  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:17.228200  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:17.228258  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:17.263908  296043 cri.go:89] found id: ""
	I0214 21:59:17.263939  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.263949  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:17.263957  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:17.264010  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:17.303228  296043 cri.go:89] found id: ""
	I0214 21:59:17.303258  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.303269  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:17.303277  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:17.303340  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:17.339259  296043 cri.go:89] found id: ""
	I0214 21:59:17.339302  296043 logs.go:282] 0 containers: []
	W0214 21:59:17.339314  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:17.339335  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:17.339353  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:17.422771  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:17.422799  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:17.471115  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:17.471147  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:17.532951  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:17.532984  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:17.550258  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:17.550301  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:17.629853  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:20.130779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:20.145428  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:20.145512  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:20.192381  296043 cri.go:89] found id: ""
	I0214 21:59:20.192408  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.192419  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:20.192427  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:20.192489  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:20.231853  296043 cri.go:89] found id: ""
	I0214 21:59:20.231881  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.231891  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:20.231903  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:20.231963  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:20.279193  296043 cri.go:89] found id: ""
	I0214 21:59:20.279223  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.279234  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:20.279241  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:20.279305  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:20.321804  296043 cri.go:89] found id: ""
	I0214 21:59:20.321835  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.321846  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:20.321854  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:20.321920  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:20.372258  296043 cri.go:89] found id: ""
	I0214 21:59:20.372290  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.372301  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:20.372321  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:20.372382  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:20.418605  296043 cri.go:89] found id: ""
	I0214 21:59:20.418655  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.418667  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:20.418682  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:20.418745  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:20.465011  296043 cri.go:89] found id: ""
	I0214 21:59:20.465039  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.465057  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:20.465064  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:20.465131  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:20.508754  296043 cri.go:89] found id: ""
	I0214 21:59:20.508792  296043 logs.go:282] 0 containers: []
	W0214 21:59:20.508805  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:20.508819  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:20.508833  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:20.613033  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:20.613069  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:20.678921  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:20.678948  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:20.748738  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:20.748772  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:20.765078  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:20.765119  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:20.869700  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:23.370775  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:23.386680  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:23.386739  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:23.437912  296043 cri.go:89] found id: ""
	I0214 21:59:23.438374  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.438391  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:23.438400  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:23.438465  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:23.479914  296043 cri.go:89] found id: ""
	I0214 21:59:23.479942  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.479954  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:23.479962  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:23.480026  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:23.519181  296043 cri.go:89] found id: ""
	I0214 21:59:23.519212  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.519223  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:23.519232  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:23.519293  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:23.563826  296043 cri.go:89] found id: ""
	I0214 21:59:23.563859  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.563872  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:23.563881  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:23.563939  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:23.608647  296043 cri.go:89] found id: ""
	I0214 21:59:23.608692  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.608708  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:23.608716  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:23.608784  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:23.646845  296043 cri.go:89] found id: ""
	I0214 21:59:23.646883  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.646896  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:23.646905  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:23.646974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:23.689230  296043 cri.go:89] found id: ""
	I0214 21:59:23.689262  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.689274  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:23.689281  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:23.689362  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:23.727951  296043 cri.go:89] found id: ""
	I0214 21:59:23.727985  296043 logs.go:282] 0 containers: []
	W0214 21:59:23.727998  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:23.728012  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:23.728029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:23.825652  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:23.825708  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:23.825731  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:23.907149  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:23.907182  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:23.952598  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:23.952637  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:24.012946  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:24.012980  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:26.528329  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:26.552442  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:26.552528  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:26.650125  296043 cri.go:89] found id: ""
	I0214 21:59:26.650230  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.650257  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:26.650277  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:26.650378  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:26.691843  296043 cri.go:89] found id: ""
	I0214 21:59:26.691889  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.691900  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:26.691908  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:26.691978  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:26.744302  296043 cri.go:89] found id: ""
	I0214 21:59:26.744399  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.744423  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:26.744441  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:26.744540  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:26.795086  296043 cri.go:89] found id: ""
	I0214 21:59:26.795122  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.795137  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:26.795146  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:26.795232  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:26.835857  296043 cri.go:89] found id: ""
	I0214 21:59:26.835891  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.835915  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:26.836034  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:26.836115  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:26.879738  296043 cri.go:89] found id: ""
	I0214 21:59:26.879764  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.879777  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:26.879786  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:26.879846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:26.928382  296043 cri.go:89] found id: ""
	I0214 21:59:26.928406  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.928418  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:26.928426  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:26.928487  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:26.968314  296043 cri.go:89] found id: ""
	I0214 21:59:26.968341  296043 logs.go:282] 0 containers: []
	W0214 21:59:26.968357  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:26.968371  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:26.968386  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:27.092443  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:27.092472  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:27.092490  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:27.182140  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:27.182177  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:27.235068  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:27.235114  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:27.296153  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:27.296189  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:29.811118  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:29.826131  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:29.826206  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:29.864944  296043 cri.go:89] found id: ""
	I0214 21:59:29.864975  296043 logs.go:282] 0 containers: []
	W0214 21:59:29.864986  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:29.865001  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:29.865068  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:29.916512  296043 cri.go:89] found id: ""
	I0214 21:59:29.916545  296043 logs.go:282] 0 containers: []
	W0214 21:59:29.916557  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:29.916565  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:29.916624  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:29.955819  296043 cri.go:89] found id: ""
	I0214 21:59:29.955852  296043 logs.go:282] 0 containers: []
	W0214 21:59:29.955865  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:29.955874  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:29.955935  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:30.001701  296043 cri.go:89] found id: ""
	I0214 21:59:30.001728  296043 logs.go:282] 0 containers: []
	W0214 21:59:30.001742  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:30.001750  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:30.001804  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:30.055466  296043 cri.go:89] found id: ""
	I0214 21:59:30.055495  296043 logs.go:282] 0 containers: []
	W0214 21:59:30.055513  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:30.055521  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:30.055591  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:30.101743  296043 cri.go:89] found id: ""
	I0214 21:59:30.101773  296043 logs.go:282] 0 containers: []
	W0214 21:59:30.101784  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:30.101791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:30.101853  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:30.152059  296043 cri.go:89] found id: ""
	I0214 21:59:30.152084  296043 logs.go:282] 0 containers: []
	W0214 21:59:30.152093  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:30.152101  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:30.152154  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:30.190719  296043 cri.go:89] found id: ""
	I0214 21:59:30.190747  296043 logs.go:282] 0 containers: []
	W0214 21:59:30.190758  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:30.190772  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:30.190787  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:30.210300  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:30.210351  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:30.305655  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:30.305677  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:30.305692  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:30.404951  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:30.405029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:30.449056  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:30.449083  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:33.013761  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:33.030072  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:33.030138  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:33.074849  296043 cri.go:89] found id: ""
	I0214 21:59:33.074879  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.074889  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:33.074897  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:33.074957  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:33.140330  296043 cri.go:89] found id: ""
	I0214 21:59:33.140365  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.140376  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:33.140385  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:33.140441  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:33.187683  296043 cri.go:89] found id: ""
	I0214 21:59:33.187712  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.187724  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:33.187731  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:33.187795  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:33.230754  296043 cri.go:89] found id: ""
	I0214 21:59:33.230775  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.230784  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:33.230790  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:33.230833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:33.271192  296043 cri.go:89] found id: ""
	I0214 21:59:33.271220  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.271232  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:33.271240  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:33.271292  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:33.311534  296043 cri.go:89] found id: ""
	I0214 21:59:33.311556  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.311564  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:33.311570  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:33.311614  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:33.352319  296043 cri.go:89] found id: ""
	I0214 21:59:33.352352  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.352364  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:33.352371  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:33.352424  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:33.395517  296043 cri.go:89] found id: ""
	I0214 21:59:33.395545  296043 logs.go:282] 0 containers: []
	W0214 21:59:33.395565  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:33.395579  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:33.395597  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:33.410928  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:33.410966  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:33.510200  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:33.510228  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:33.510243  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:33.633876  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:33.633918  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:33.684696  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:33.684726  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:36.251523  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:36.267013  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:36.267075  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:36.307514  296043 cri.go:89] found id: ""
	I0214 21:59:36.307557  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.307568  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:36.307577  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:36.307652  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:36.347254  296043 cri.go:89] found id: ""
	I0214 21:59:36.347288  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.347300  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:36.347308  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:36.347381  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:36.386777  296043 cri.go:89] found id: ""
	I0214 21:59:36.386806  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.386817  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:36.386826  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:36.386896  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:36.424591  296043 cri.go:89] found id: ""
	I0214 21:59:36.424617  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.424628  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:36.424637  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:36.424699  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:36.465540  296043 cri.go:89] found id: ""
	I0214 21:59:36.465559  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.465566  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:36.465571  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:36.465612  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:36.509116  296043 cri.go:89] found id: ""
	I0214 21:59:36.509136  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.509145  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:36.509150  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:36.509204  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:36.555105  296043 cri.go:89] found id: ""
	I0214 21:59:36.555131  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.555142  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:36.555150  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:36.555217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:36.599516  296043 cri.go:89] found id: ""
	I0214 21:59:36.599548  296043 logs.go:282] 0 containers: []
	W0214 21:59:36.599559  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:36.599573  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:36.599594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:36.657600  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:36.657635  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:36.677476  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:36.677525  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:36.806195  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:36.806238  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:36.806255  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:36.922914  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:36.922965  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:39.485033  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:39.510077  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:39.510152  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:39.566583  296043 cri.go:89] found id: ""
	I0214 21:59:39.566610  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.566638  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:39.566647  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:39.566703  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:39.619700  296043 cri.go:89] found id: ""
	I0214 21:59:39.619726  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.619740  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:39.619748  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:39.619814  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:39.669074  296043 cri.go:89] found id: ""
	I0214 21:59:39.669100  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.669112  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:39.669119  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:39.669176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:39.718887  296043 cri.go:89] found id: ""
	I0214 21:59:39.718918  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.718930  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:39.718939  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:39.719006  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:39.778446  296043 cri.go:89] found id: ""
	I0214 21:59:39.778499  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.778511  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:39.778519  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:39.778582  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:39.817325  296043 cri.go:89] found id: ""
	I0214 21:59:39.817359  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.817372  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:39.817381  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:39.817445  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:39.860014  296043 cri.go:89] found id: ""
	I0214 21:59:39.860058  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.860070  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:39.860079  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:39.860148  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:39.906231  296043 cri.go:89] found id: ""
	I0214 21:59:39.906253  296043 logs.go:282] 0 containers: []
	W0214 21:59:39.906262  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:39.906271  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:39.906288  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:39.989177  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:39.989199  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:39.989215  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:40.066385  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:40.066415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:40.115349  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:40.115372  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:40.174785  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:40.174818  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:42.692209  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:42.709679  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:42.709743  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:42.750225  296043 cri.go:89] found id: ""
	I0214 21:59:42.750256  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.750268  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:42.750276  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:42.750337  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:42.786412  296043 cri.go:89] found id: ""
	I0214 21:59:42.786444  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.786455  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:42.786465  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:42.786528  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:42.826435  296043 cri.go:89] found id: ""
	I0214 21:59:42.826467  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.826479  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:42.826491  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:42.826554  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:42.867725  296043 cri.go:89] found id: ""
	I0214 21:59:42.867756  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.867767  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:42.867774  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:42.867840  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:42.905906  296043 cri.go:89] found id: ""
	I0214 21:59:42.905936  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.905948  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:42.905955  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:42.906018  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:42.949028  296043 cri.go:89] found id: ""
	I0214 21:59:42.949059  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.949070  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:42.949079  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:42.949146  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:42.992706  296043 cri.go:89] found id: ""
	I0214 21:59:42.992733  296043 logs.go:282] 0 containers: []
	W0214 21:59:42.992745  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:42.992753  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:42.992814  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:43.029088  296043 cri.go:89] found id: ""
	I0214 21:59:43.029122  296043 logs.go:282] 0 containers: []
	W0214 21:59:43.029155  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:43.029169  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:43.029184  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:43.073745  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:43.073783  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:43.143974  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:43.144017  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:43.161306  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:43.161343  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:43.244925  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:43.244951  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:43.244969  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:45.861329  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:45.875819  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:45.875888  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:45.912083  296043 cri.go:89] found id: ""
	I0214 21:59:45.912112  296043 logs.go:282] 0 containers: []
	W0214 21:59:45.912125  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:45.912133  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:45.912199  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:45.944988  296043 cri.go:89] found id: ""
	I0214 21:59:45.945016  296043 logs.go:282] 0 containers: []
	W0214 21:59:45.945028  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:45.945035  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:45.945108  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:45.986269  296043 cri.go:89] found id: ""
	I0214 21:59:45.986296  296043 logs.go:282] 0 containers: []
	W0214 21:59:45.986308  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:45.986316  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:45.986376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:46.025609  296043 cri.go:89] found id: ""
	I0214 21:59:46.025635  296043 logs.go:282] 0 containers: []
	W0214 21:59:46.025649  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:46.025657  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:46.025711  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:46.066078  296043 cri.go:89] found id: ""
	I0214 21:59:46.066119  296043 logs.go:282] 0 containers: []
	W0214 21:59:46.066129  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:46.066135  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:46.066192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:46.100747  296043 cri.go:89] found id: ""
	I0214 21:59:46.100789  296043 logs.go:282] 0 containers: []
	W0214 21:59:46.100803  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:46.100811  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:46.100875  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:46.136957  296043 cri.go:89] found id: ""
	I0214 21:59:46.136989  296043 logs.go:282] 0 containers: []
	W0214 21:59:46.137001  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:46.137007  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:46.137072  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:46.176769  296043 cri.go:89] found id: ""
	I0214 21:59:46.176795  296043 logs.go:282] 0 containers: []
	W0214 21:59:46.176804  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:46.176814  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:46.176827  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:46.226592  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:46.226636  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:46.240695  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:46.240719  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:46.327280  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:46.327307  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:46.327324  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:46.407724  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:46.407757  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:48.950689  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:48.969544  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:48.969617  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:49.013977  296043 cri.go:89] found id: ""
	I0214 21:59:49.014013  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.014025  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:49.014033  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:49.014091  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:49.064907  296043 cri.go:89] found id: ""
	I0214 21:59:49.064950  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.064962  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:49.064970  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:49.065032  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:49.116099  296043 cri.go:89] found id: ""
	I0214 21:59:49.116139  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.116150  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:49.116159  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:49.116226  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:49.169815  296043 cri.go:89] found id: ""
	I0214 21:59:49.169845  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.169857  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:49.169865  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:49.169924  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:49.210956  296043 cri.go:89] found id: ""
	I0214 21:59:49.210986  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.210999  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:49.211006  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:49.211077  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:49.258995  296043 cri.go:89] found id: ""
	I0214 21:59:49.259031  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.259044  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:49.259053  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:49.259122  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:49.303035  296043 cri.go:89] found id: ""
	I0214 21:59:49.303073  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.303085  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:49.303094  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:49.303159  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:49.352831  296043 cri.go:89] found id: ""
	I0214 21:59:49.352868  296043 logs.go:282] 0 containers: []
	W0214 21:59:49.352880  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:49.352894  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:49.352910  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:49.460760  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:49.460800  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:49.513980  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:49.514020  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:49.572708  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:49.572751  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:49.588815  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:49.588852  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:49.669216  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:52.170433  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:52.189428  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:52.189507  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:52.231961  296043 cri.go:89] found id: ""
	I0214 21:59:52.232007  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.232020  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:52.232040  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:52.232133  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:52.272771  296043 cri.go:89] found id: ""
	I0214 21:59:52.272808  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.272822  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:52.272833  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:52.272900  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:52.313498  296043 cri.go:89] found id: ""
	I0214 21:59:52.313536  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.313549  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:52.313556  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:52.313627  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:52.352862  296043 cri.go:89] found id: ""
	I0214 21:59:52.352891  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.352902  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:52.352916  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:52.352983  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:52.398147  296043 cri.go:89] found id: ""
	I0214 21:59:52.398182  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.398195  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:52.398202  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:52.398273  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:52.444031  296043 cri.go:89] found id: ""
	I0214 21:59:52.444066  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.444077  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:52.444085  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:52.444162  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:52.488989  296043 cri.go:89] found id: ""
	I0214 21:59:52.489021  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.489037  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:52.489044  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:52.489099  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:52.530614  296043 cri.go:89] found id: ""
	I0214 21:59:52.530665  296043 logs.go:282] 0 containers: []
	W0214 21:59:52.530678  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:52.530691  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:52.530710  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:52.603093  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:52.603122  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:52.603148  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:52.722116  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:52.722170  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:52.767309  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:52.767345  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:52.821421  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:52.821456  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:55.336249  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:55.349933  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:55.350017  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:55.384563  296043 cri.go:89] found id: ""
	I0214 21:59:55.384587  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.384598  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:55.384605  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:55.384664  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:55.418162  296043 cri.go:89] found id: ""
	I0214 21:59:55.418191  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.418203  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:55.418210  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:55.418273  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:55.470120  296043 cri.go:89] found id: ""
	I0214 21:59:55.470149  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.470161  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:55.470170  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:55.470235  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:55.503420  296043 cri.go:89] found id: ""
	I0214 21:59:55.503443  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.503451  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:55.503456  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:55.503510  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:55.535050  296043 cri.go:89] found id: ""
	I0214 21:59:55.535076  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.535097  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:55.535104  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:55.535162  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:55.566950  296043 cri.go:89] found id: ""
	I0214 21:59:55.566969  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.566978  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:55.566986  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:55.567043  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:55.603126  296043 cri.go:89] found id: ""
	I0214 21:59:55.603152  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.603161  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:55.603167  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:55.603217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:55.637254  296043 cri.go:89] found id: ""
	I0214 21:59:55.637276  296043 logs.go:282] 0 containers: []
	W0214 21:59:55.637290  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:55.637302  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:55.637318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:55.686606  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:55.686699  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:55.699922  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:55.699946  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:55.779771  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:55.779795  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:55.779822  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:55.860170  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:55.860200  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 21:59:58.403807  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:59:58.417921  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 21:59:58.417989  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 21:59:58.458061  296043 cri.go:89] found id: ""
	I0214 21:59:58.458092  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.458101  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 21:59:58.458107  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 21:59:58.458149  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 21:59:58.495465  296043 cri.go:89] found id: ""
	I0214 21:59:58.495496  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.495508  296043 logs.go:284] No container was found matching "etcd"
	I0214 21:59:58.495518  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 21:59:58.495575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 21:59:58.535310  296043 cri.go:89] found id: ""
	I0214 21:59:58.535333  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.535341  296043 logs.go:284] No container was found matching "coredns"
	I0214 21:59:58.535348  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 21:59:58.535402  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 21:59:58.575746  296043 cri.go:89] found id: ""
	I0214 21:59:58.575772  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.575782  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 21:59:58.575790  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 21:59:58.575850  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 21:59:58.613649  296043 cri.go:89] found id: ""
	I0214 21:59:58.613692  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.613703  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 21:59:58.613711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 21:59:58.613781  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 21:59:58.656100  296043 cri.go:89] found id: ""
	I0214 21:59:58.656125  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.656134  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 21:59:58.656140  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 21:59:58.656186  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 21:59:58.695626  296043 cri.go:89] found id: ""
	I0214 21:59:58.695652  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.695661  296043 logs.go:284] No container was found matching "kindnet"
	I0214 21:59:58.695667  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 21:59:58.695734  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 21:59:58.734010  296043 cri.go:89] found id: ""
	I0214 21:59:58.734041  296043 logs.go:282] 0 containers: []
	W0214 21:59:58.734052  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 21:59:58.734064  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 21:59:58.734079  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 21:59:58.791609  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 21:59:58.791639  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 21:59:58.807617  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 21:59:58.807642  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 21:59:58.887958  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 21:59:58.887978  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 21:59:58.887995  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 21:59:58.968232  296043 logs.go:123] Gathering logs for container status ...
	I0214 21:59:58.968264  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:01.504677  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:01.518257  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:01.518322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:01.559101  296043 cri.go:89] found id: ""
	I0214 22:00:01.559132  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.559143  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:01.559151  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:01.559206  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:01.602475  296043 cri.go:89] found id: ""
	I0214 22:00:01.602507  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.602517  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:01.602522  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:01.602592  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:01.643080  296043 cri.go:89] found id: ""
	I0214 22:00:01.643105  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.643112  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:01.643118  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:01.643172  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:01.685472  296043 cri.go:89] found id: ""
	I0214 22:00:01.685493  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.685503  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:01.685511  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:01.685555  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:01.727572  296043 cri.go:89] found id: ""
	I0214 22:00:01.727593  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.727600  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:01.727605  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:01.727654  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:01.770671  296043 cri.go:89] found id: ""
	I0214 22:00:01.770718  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.770729  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:01.770737  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:01.770788  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:01.806977  296043 cri.go:89] found id: ""
	I0214 22:00:01.807002  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.807012  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:01.807018  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:01.807061  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:01.843579  296043 cri.go:89] found id: ""
	I0214 22:00:01.843600  296043 logs.go:282] 0 containers: []
	W0214 22:00:01.843608  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:01.843618  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:01.843629  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:01.894632  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:01.894655  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:01.907198  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:01.907221  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:01.975128  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:01.975147  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:01.975161  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:02.071926  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:02.071958  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:04.626707  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:04.641714  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:04.641775  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:04.678544  296043 cri.go:89] found id: ""
	I0214 22:00:04.678569  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.678576  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:04.678583  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:04.678644  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:04.711646  296043 cri.go:89] found id: ""
	I0214 22:00:04.711667  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.711674  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:04.711680  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:04.711722  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:04.747858  296043 cri.go:89] found id: ""
	I0214 22:00:04.747876  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.747886  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:04.747894  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:04.747976  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:04.784711  296043 cri.go:89] found id: ""
	I0214 22:00:04.784737  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.784745  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:04.784752  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:04.784803  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:04.826170  296043 cri.go:89] found id: ""
	I0214 22:00:04.826193  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.826203  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:04.826211  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:04.826263  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:04.865418  296043 cri.go:89] found id: ""
	I0214 22:00:04.865441  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.865451  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:04.865459  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:04.865516  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:04.912320  296043 cri.go:89] found id: ""
	I0214 22:00:04.912350  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.912361  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:04.912371  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:04.912436  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:04.950653  296043 cri.go:89] found id: ""
	I0214 22:00:04.950681  296043 logs.go:282] 0 containers: []
	W0214 22:00:04.950692  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:04.950704  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:04.950719  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:05.003782  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:05.003810  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:05.021101  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:05.021143  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:05.137417  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:05.137446  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:05.137464  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:05.253716  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:05.253817  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:07.806534  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:07.830348  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:07.830420  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:07.897440  296043 cri.go:89] found id: ""
	I0214 22:00:07.897462  296043 logs.go:282] 0 containers: []
	W0214 22:00:07.897471  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:07.897477  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:07.897524  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:07.945979  296043 cri.go:89] found id: ""
	I0214 22:00:07.946011  296043 logs.go:282] 0 containers: []
	W0214 22:00:07.946022  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:07.946031  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:07.946095  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:07.987427  296043 cri.go:89] found id: ""
	I0214 22:00:07.987452  296043 logs.go:282] 0 containers: []
	W0214 22:00:07.987462  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:07.987470  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:07.987523  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:08.031425  296043 cri.go:89] found id: ""
	I0214 22:00:08.031456  296043 logs.go:282] 0 containers: []
	W0214 22:00:08.031469  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:08.031479  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:08.031537  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:08.077740  296043 cri.go:89] found id: ""
	I0214 22:00:08.077763  296043 logs.go:282] 0 containers: []
	W0214 22:00:08.077772  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:08.077778  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:08.077826  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:08.119180  296043 cri.go:89] found id: ""
	I0214 22:00:08.119199  296043 logs.go:282] 0 containers: []
	W0214 22:00:08.119208  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:08.119216  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:08.119269  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:08.166737  296043 cri.go:89] found id: ""
	I0214 22:00:08.166766  296043 logs.go:282] 0 containers: []
	W0214 22:00:08.166781  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:08.166791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:08.166854  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:08.220193  296043 cri.go:89] found id: ""
	I0214 22:00:08.220222  296043 logs.go:282] 0 containers: []
	W0214 22:00:08.220233  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:08.220246  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:08.220261  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:08.278484  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:08.278525  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:08.296385  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:08.296469  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:08.393863  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:08.393891  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:08.393907  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:08.486916  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:08.486943  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:11.039000  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:11.052969  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:11.053076  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:11.092246  296043 cri.go:89] found id: ""
	I0214 22:00:11.092273  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.092284  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:11.092292  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:11.092345  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:11.140407  296043 cri.go:89] found id: ""
	I0214 22:00:11.140435  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.140443  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:11.140448  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:11.140508  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:11.182441  296043 cri.go:89] found id: ""
	I0214 22:00:11.182464  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.182473  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:11.182480  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:11.182525  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:11.236437  296043 cri.go:89] found id: ""
	I0214 22:00:11.236462  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.236472  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:11.236480  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:11.236535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:11.284521  296043 cri.go:89] found id: ""
	I0214 22:00:11.284552  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.284569  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:11.284575  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:11.284641  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:11.336127  296043 cri.go:89] found id: ""
	I0214 22:00:11.336160  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.336171  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:11.336179  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:11.336247  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:11.377084  296043 cri.go:89] found id: ""
	I0214 22:00:11.377114  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.377126  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:11.377133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:11.377192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:11.420249  296043 cri.go:89] found id: ""
	I0214 22:00:11.420278  296043 logs.go:282] 0 containers: []
	W0214 22:00:11.420287  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:11.420299  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:11.420313  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:11.489093  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:11.489130  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:11.505956  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:11.505987  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:11.593881  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:11.593914  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:11.593931  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:11.689647  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:11.689683  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:14.235222  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:14.250928  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:14.250991  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:14.310788  296043 cri.go:89] found id: ""
	I0214 22:00:14.310814  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.310824  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:14.310832  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:14.310879  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:14.359466  296043 cri.go:89] found id: ""
	I0214 22:00:14.359489  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.359500  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:14.359507  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:14.359553  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:14.409653  296043 cri.go:89] found id: ""
	I0214 22:00:14.409681  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.409692  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:14.409699  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:14.409766  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:14.457880  296043 cri.go:89] found id: ""
	I0214 22:00:14.457906  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.457918  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:14.457925  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:14.457980  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:14.505798  296043 cri.go:89] found id: ""
	I0214 22:00:14.505827  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.505837  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:14.505851  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:14.505917  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:14.557674  296043 cri.go:89] found id: ""
	I0214 22:00:14.557697  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.557705  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:14.557711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:14.557752  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:14.600069  296043 cri.go:89] found id: ""
	I0214 22:00:14.600104  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.600117  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:14.600127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:14.600188  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:14.653139  296043 cri.go:89] found id: ""
	I0214 22:00:14.653167  296043 logs.go:282] 0 containers: []
	W0214 22:00:14.653179  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:14.653193  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:14.653209  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:14.670488  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:14.670517  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:14.748957  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:14.748987  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:14.749004  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:14.841097  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:14.841132  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:14.898058  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:14.898080  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:17.465066  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:17.482867  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:17.482946  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:17.528365  296043 cri.go:89] found id: ""
	I0214 22:00:17.528395  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.528408  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:17.528416  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:17.528489  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:17.571601  296043 cri.go:89] found id: ""
	I0214 22:00:17.571629  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.571639  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:17.571645  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:17.571700  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:17.604745  296043 cri.go:89] found id: ""
	I0214 22:00:17.604778  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.604789  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:17.604797  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:17.604858  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:17.641832  296043 cri.go:89] found id: ""
	I0214 22:00:17.641861  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.641872  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:17.641880  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:17.641940  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:17.681527  296043 cri.go:89] found id: ""
	I0214 22:00:17.681558  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.681570  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:17.681578  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:17.681647  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:17.717296  296043 cri.go:89] found id: ""
	I0214 22:00:17.717322  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.717333  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:17.717342  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:17.717410  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:17.753075  296043 cri.go:89] found id: ""
	I0214 22:00:17.753101  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.753111  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:17.753117  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:17.753175  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:17.788560  296043 cri.go:89] found id: ""
	I0214 22:00:17.788592  296043 logs.go:282] 0 containers: []
	W0214 22:00:17.788602  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:17.788615  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:17.788628  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:17.839562  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:17.839591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:17.852392  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:17.852415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:17.922087  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:17.922117  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:17.922133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:18.008630  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:18.008670  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:20.563551  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:20.581276  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:20.581357  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:20.620717  296043 cri.go:89] found id: ""
	I0214 22:00:20.620750  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.620762  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:20.620777  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:20.620841  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:20.665295  296043 cri.go:89] found id: ""
	I0214 22:00:20.665324  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.665343  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:20.665352  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:20.665414  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:20.702927  296043 cri.go:89] found id: ""
	I0214 22:00:20.702954  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.702966  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:20.702973  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:20.703037  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:20.740567  296043 cri.go:89] found id: ""
	I0214 22:00:20.740596  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.740607  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:20.740615  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:20.740675  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:20.775586  296043 cri.go:89] found id: ""
	I0214 22:00:20.775616  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.775628  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:20.775635  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:20.775694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:20.813936  296043 cri.go:89] found id: ""
	I0214 22:00:20.813973  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.813984  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:20.813994  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:20.814062  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:20.854158  296043 cri.go:89] found id: ""
	I0214 22:00:20.854188  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.854196  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:20.854202  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:20.854259  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:20.886390  296043 cri.go:89] found id: ""
	I0214 22:00:20.886418  296043 logs.go:282] 0 containers: []
	W0214 22:00:20.886438  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:20.886448  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:20.886470  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:20.934236  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:20.934275  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:21.013107  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:21.013159  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:21.031890  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:21.031935  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:21.114571  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:21.114595  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:21.114613  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:23.706783  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:23.722898  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:23.722989  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:23.767172  296043 cri.go:89] found id: ""
	I0214 22:00:23.767205  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.767217  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:23.767226  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:23.767280  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:23.813784  296043 cri.go:89] found id: ""
	I0214 22:00:23.813821  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.813834  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:23.813844  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:23.813919  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:23.851969  296043 cri.go:89] found id: ""
	I0214 22:00:23.851999  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.852013  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:23.852023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:23.852090  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:23.891306  296043 cri.go:89] found id: ""
	I0214 22:00:23.891333  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.891350  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:23.891359  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:23.891419  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:23.936303  296043 cri.go:89] found id: ""
	I0214 22:00:23.936333  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.936352  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:23.936360  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:23.936418  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:23.982506  296043 cri.go:89] found id: ""
	I0214 22:00:23.982541  296043 logs.go:282] 0 containers: []
	W0214 22:00:23.982554  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:23.982561  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:23.982642  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:24.024288  296043 cri.go:89] found id: ""
	I0214 22:00:24.024318  296043 logs.go:282] 0 containers: []
	W0214 22:00:24.024328  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:24.024336  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:24.024409  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:24.066580  296043 cri.go:89] found id: ""
	I0214 22:00:24.066609  296043 logs.go:282] 0 containers: []
	W0214 22:00:24.066620  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:24.066668  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:24.066684  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:24.134196  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:24.134229  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:24.151406  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:24.151435  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:24.237527  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:24.237558  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:24.237575  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:24.353662  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:24.353699  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:26.904024  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:26.918476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:26.918558  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:26.956449  296043 cri.go:89] found id: ""
	I0214 22:00:26.956480  296043 logs.go:282] 0 containers: []
	W0214 22:00:26.956493  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:26.956500  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:26.956568  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:27.000428  296043 cri.go:89] found id: ""
	I0214 22:00:27.000464  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.000478  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:27.000486  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:27.000553  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:27.046974  296043 cri.go:89] found id: ""
	I0214 22:00:27.046999  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.047010  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:27.047018  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:27.047082  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:27.083828  296043 cri.go:89] found id: ""
	I0214 22:00:27.083868  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.083880  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:27.083890  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:27.083957  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:27.124026  296043 cri.go:89] found id: ""
	I0214 22:00:27.124066  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.124078  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:27.124087  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:27.124152  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:27.164233  296043 cri.go:89] found id: ""
	I0214 22:00:27.164263  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.164274  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:27.164283  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:27.164349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:27.208532  296043 cri.go:89] found id: ""
	I0214 22:00:27.208557  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.208566  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:27.208574  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:27.208637  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:27.251024  296043 cri.go:89] found id: ""
	I0214 22:00:27.251058  296043 logs.go:282] 0 containers: []
	W0214 22:00:27.251070  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:27.251083  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:27.251115  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:27.296695  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:27.296727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:27.364388  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:27.364432  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:27.380303  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:27.380340  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:27.450137  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:27.450164  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:27.450182  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:30.053921  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:30.070885  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:30.070946  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:30.117500  296043 cri.go:89] found id: ""
	I0214 22:00:30.117525  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.117535  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:30.117543  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:30.117599  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:30.162182  296043 cri.go:89] found id: ""
	I0214 22:00:30.162215  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.162228  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:30.162236  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:30.162299  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:30.205445  296043 cri.go:89] found id: ""
	I0214 22:00:30.205478  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.205489  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:30.205497  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:30.205562  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:30.246462  296043 cri.go:89] found id: ""
	I0214 22:00:30.246490  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.246500  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:30.246509  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:30.246575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:30.296127  296043 cri.go:89] found id: ""
	I0214 22:00:30.296154  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.296165  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:30.296173  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:30.296229  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:30.344621  296043 cri.go:89] found id: ""
	I0214 22:00:30.344647  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.344657  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:30.344665  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:30.344719  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:30.390470  296043 cri.go:89] found id: ""
	I0214 22:00:30.390500  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.390512  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:30.390521  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:30.390585  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:30.436575  296043 cri.go:89] found id: ""
	I0214 22:00:30.436601  296043 logs.go:282] 0 containers: []
	W0214 22:00:30.436613  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:30.436625  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:30.436641  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:30.452635  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:30.452664  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:30.534935  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:30.534953  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:30.534963  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:30.624160  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:30.624189  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:30.690165  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:30.690200  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:33.257457  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:33.273582  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:33.273649  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:33.309979  296043 cri.go:89] found id: ""
	I0214 22:00:33.310010  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.310021  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:33.310031  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:33.310086  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:33.348650  296043 cri.go:89] found id: ""
	I0214 22:00:33.348679  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.348689  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:33.348699  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:33.348755  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:33.392437  296043 cri.go:89] found id: ""
	I0214 22:00:33.392461  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.392470  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:33.392476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:33.392522  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:33.427444  296043 cri.go:89] found id: ""
	I0214 22:00:33.427467  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.427476  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:33.427484  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:33.427533  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:33.469383  296043 cri.go:89] found id: ""
	I0214 22:00:33.469409  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.469419  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:33.469426  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:33.469477  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:33.504249  296043 cri.go:89] found id: ""
	I0214 22:00:33.504277  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.504289  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:33.504297  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:33.504360  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:33.544299  296043 cri.go:89] found id: ""
	I0214 22:00:33.544321  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.544332  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:33.544340  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:33.544390  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:33.583250  296043 cri.go:89] found id: ""
	I0214 22:00:33.583277  296043 logs.go:282] 0 containers: []
	W0214 22:00:33.583289  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:33.583304  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:33.583318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:33.671681  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:33.671727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:33.716366  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:33.716396  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:33.774343  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:33.774370  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:33.787883  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:33.787906  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:33.862768  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:36.363651  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:36.401793  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:36.401856  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:36.441237  296043 cri.go:89] found id: ""
	I0214 22:00:36.441263  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.441273  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:36.441281  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:36.441347  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:36.485483  296043 cri.go:89] found id: ""
	I0214 22:00:36.485503  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.485510  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:36.485515  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:36.485560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:36.526106  296043 cri.go:89] found id: ""
	I0214 22:00:36.526134  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.526144  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:36.526151  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:36.526219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:36.566896  296043 cri.go:89] found id: ""
	I0214 22:00:36.566926  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.566937  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:36.566945  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:36.567015  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:36.608255  296043 cri.go:89] found id: ""
	I0214 22:00:36.608281  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.608293  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:36.608301  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:36.608361  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:36.646176  296043 cri.go:89] found id: ""
	I0214 22:00:36.646206  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.646216  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:36.646224  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:36.646276  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:36.686363  296043 cri.go:89] found id: ""
	I0214 22:00:36.686390  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.686444  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:36.686457  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:36.686511  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:36.719622  296043 cri.go:89] found id: ""
	I0214 22:00:36.719651  296043 logs.go:282] 0 containers: []
	W0214 22:00:36.719661  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:36.719674  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:36.719690  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:36.772428  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:36.772453  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:36.786581  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:36.786609  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:36.876425  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:36.876444  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:36.876460  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:36.954714  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:36.954740  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:39.500037  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:39.520812  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:39.520889  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:39.562216  296043 cri.go:89] found id: ""
	I0214 22:00:39.562250  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.562263  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:39.562271  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:39.562336  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:39.601201  296043 cri.go:89] found id: ""
	I0214 22:00:39.601234  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.601247  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:39.601255  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:39.601315  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:39.640202  296043 cri.go:89] found id: ""
	I0214 22:00:39.640231  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.640242  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:39.640250  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:39.640307  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:39.674932  296043 cri.go:89] found id: ""
	I0214 22:00:39.674960  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.674972  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:39.674981  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:39.675042  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:39.724788  296043 cri.go:89] found id: ""
	I0214 22:00:39.724820  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.724833  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:39.724841  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:39.724908  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:39.771267  296043 cri.go:89] found id: ""
	I0214 22:00:39.771295  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.771306  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:39.771314  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:39.771369  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:39.810824  296043 cri.go:89] found id: ""
	I0214 22:00:39.810852  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.810864  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:39.810871  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:39.810933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:39.852769  296043 cri.go:89] found id: ""
	I0214 22:00:39.852794  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.852803  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:39.852815  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:39.852831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:39.906779  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:39.906808  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:39.924045  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:39.924072  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:40.027558  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:40.027580  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:40.027594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:40.130386  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:40.130415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:42.679860  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:42.699140  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:42.699212  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:42.744951  296043 cri.go:89] found id: ""
	I0214 22:00:42.744980  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.744992  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:42.745002  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:42.745061  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:42.795928  296043 cri.go:89] found id: ""
	I0214 22:00:42.795960  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.795973  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:42.795981  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:42.796051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:42.850295  296043 cri.go:89] found id: ""
	I0214 22:00:42.850330  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.850344  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:42.850354  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:42.850427  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:42.913832  296043 cri.go:89] found id: ""
	I0214 22:00:42.913862  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.913874  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:42.913884  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:42.913947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:42.983499  296043 cri.go:89] found id: ""
	I0214 22:00:42.983589  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.983607  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:42.983615  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:42.983689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:43.037301  296043 cri.go:89] found id: ""
	I0214 22:00:43.037331  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.037343  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:43.037351  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:43.037419  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:43.084109  296043 cri.go:89] found id: ""
	I0214 22:00:43.084141  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.084153  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:43.084161  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:43.084233  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:43.139429  296043 cri.go:89] found id: ""
	I0214 22:00:43.139460  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.139473  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:43.139486  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:43.139503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:43.203986  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:43.204033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:43.221265  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:43.221297  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:43.326457  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:43.326485  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:43.326510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:43.450012  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:43.450053  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.020884  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:46.036692  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:46.036773  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:46.078455  296043 cri.go:89] found id: ""
	I0214 22:00:46.078496  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.078510  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:46.078521  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:46.078599  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:46.126385  296043 cri.go:89] found id: ""
	I0214 22:00:46.126418  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.126430  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:46.126438  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:46.126505  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:46.174790  296043 cri.go:89] found id: ""
	I0214 22:00:46.174823  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.174836  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:46.174844  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:46.174911  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:46.236219  296043 cri.go:89] found id: ""
	I0214 22:00:46.236264  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.236276  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:46.236284  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:46.236349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:46.279991  296043 cri.go:89] found id: ""
	I0214 22:00:46.280019  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.280031  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:46.280038  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:46.280112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:46.316834  296043 cri.go:89] found id: ""
	I0214 22:00:46.316866  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.316878  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:46.316887  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:46.316951  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:46.355156  296043 cri.go:89] found id: ""
	I0214 22:00:46.355183  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.355192  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:46.355198  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:46.355252  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:46.400157  296043 cri.go:89] found id: ""
	I0214 22:00:46.400184  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.400193  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:46.400204  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:46.400220  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.451755  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:46.451791  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:46.527757  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:46.527804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:46.544748  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:46.544789  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:46.629059  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:46.629085  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:46.629101  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:49.216868  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:49.235561  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:49.235639  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:49.291785  296043 cri.go:89] found id: ""
	I0214 22:00:49.291817  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.291830  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:49.291840  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:49.291901  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:49.340347  296043 cri.go:89] found id: ""
	I0214 22:00:49.340374  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.340385  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:49.340393  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:49.340446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:49.386999  296043 cri.go:89] found id: ""
	I0214 22:00:49.387030  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.387041  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:49.387048  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:49.387114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:49.433819  296043 cri.go:89] found id: ""
	I0214 22:00:49.433849  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.433861  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:49.433868  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:49.433930  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:49.477406  296043 cri.go:89] found id: ""
	I0214 22:00:49.477453  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.477467  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:49.477478  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:49.477560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:49.522581  296043 cri.go:89] found id: ""
	I0214 22:00:49.522618  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.522648  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:49.522657  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:49.522721  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:49.560370  296043 cri.go:89] found id: ""
	I0214 22:00:49.560399  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.560410  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:49.560418  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:49.560479  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:49.600705  296043 cri.go:89] found id: ""
	I0214 22:00:49.600738  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.600751  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:49.600765  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:49.600787  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:49.692921  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:49.693003  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:49.715093  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:49.715190  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:49.819499  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:49.819529  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:49.819546  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:49.955944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:49.955994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:52.528580  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:52.545309  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:52.545394  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:52.587415  296043 cri.go:89] found id: ""
	I0214 22:00:52.587446  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.587458  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:52.587466  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:52.587534  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:52.647538  296043 cri.go:89] found id: ""
	I0214 22:00:52.647649  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.647668  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:52.647677  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:52.647749  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:52.700570  296043 cri.go:89] found id: ""
	I0214 22:00:52.700603  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.700615  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:52.700624  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:52.700687  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:52.740732  296043 cri.go:89] found id: ""
	I0214 22:00:52.740764  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.740775  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:52.740782  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:52.740846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:52.781456  296043 cri.go:89] found id: ""
	I0214 22:00:52.781491  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.781503  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:52.781512  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:52.781581  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:52.829342  296043 cri.go:89] found id: ""
	I0214 22:00:52.829380  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.829392  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:52.829400  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:52.829471  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:52.879000  296043 cri.go:89] found id: ""
	I0214 22:00:52.879033  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.879045  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:52.879053  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:52.879127  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:52.923620  296043 cri.go:89] found id: ""
	I0214 22:00:52.923667  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.923680  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:52.923698  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:52.923717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:53.052613  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:53.052665  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:53.105757  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:53.105848  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:53.188362  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:53.188408  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:53.210408  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:53.210462  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:53.308816  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:55.810467  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:55.825649  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:55.825701  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:55.861736  296043 cri.go:89] found id: ""
	I0214 22:00:55.861759  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.861769  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:55.861776  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:55.861826  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:55.903282  296043 cri.go:89] found id: ""
	I0214 22:00:55.903318  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.903330  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:55.903352  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:55.903423  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:55.948890  296043 cri.go:89] found id: ""
	I0214 22:00:55.948919  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.948930  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:55.948937  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:55.948992  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:55.994279  296043 cri.go:89] found id: ""
	I0214 22:00:55.994307  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.994316  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:55.994321  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:55.994376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:56.039497  296043 cri.go:89] found id: ""
	I0214 22:00:56.039539  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.039551  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:56.039563  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:56.039630  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:56.079255  296043 cri.go:89] found id: ""
	I0214 22:00:56.079284  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.079294  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:56.079303  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:56.079367  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:56.121581  296043 cri.go:89] found id: ""
	I0214 22:00:56.121610  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.121622  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:56.121630  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:56.121689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:56.175042  296043 cri.go:89] found id: ""
	I0214 22:00:56.175066  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.175076  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:56.175089  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:56.175103  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:56.229769  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:56.229804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:56.243975  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:56.244001  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:56.319958  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:56.319982  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:56.319996  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:56.406004  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:56.406031  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:58.959819  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:58.975738  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:58.975799  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:59.016692  296043 cri.go:89] found id: ""
	I0214 22:00:59.016722  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.016734  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:59.016742  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:59.016794  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:59.056462  296043 cri.go:89] found id: ""
	I0214 22:00:59.056486  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.056495  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:59.056504  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:59.056554  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:59.102865  296043 cri.go:89] found id: ""
	I0214 22:00:59.102893  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.102904  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:59.102911  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:59.102977  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:59.139163  296043 cri.go:89] found id: ""
	I0214 22:00:59.139189  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.139199  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:59.139204  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:59.139256  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:59.184113  296043 cri.go:89] found id: ""
	I0214 22:00:59.184142  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.184153  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:59.184160  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:59.184226  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:59.231073  296043 cri.go:89] found id: ""
	I0214 22:00:59.231104  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.231113  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:59.231123  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:59.231304  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:59.284699  296043 cri.go:89] found id: ""
	I0214 22:00:59.284723  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.284733  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:59.284741  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:59.284793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:59.337079  296043 cri.go:89] found id: ""
	I0214 22:00:59.337100  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.337107  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:59.337116  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:59.337133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:59.410337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:59.410365  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:59.410380  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:59.492678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:59.492710  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:59.535993  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:59.536022  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:59.596863  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:59.596889  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:02.111615  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:02.130034  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:02.130098  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:02.167633  296043 cri.go:89] found id: ""
	I0214 22:01:02.167669  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.167679  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:02.167687  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:02.167754  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:02.206752  296043 cri.go:89] found id: ""
	I0214 22:01:02.206778  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.206787  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:02.206793  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:02.206848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:02.242991  296043 cri.go:89] found id: ""
	I0214 22:01:02.243021  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.243033  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:02.243045  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:02.243112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:02.284141  296043 cri.go:89] found id: ""
	I0214 22:01:02.284164  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.284172  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:02.284178  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:02.284217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:02.329547  296043 cri.go:89] found id: ""
	I0214 22:01:02.329570  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.329577  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:02.329583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:02.329627  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:02.370731  296043 cri.go:89] found id: ""
	I0214 22:01:02.370758  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.370769  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:02.370778  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:02.370834  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:02.419069  296043 cri.go:89] found id: ""
	I0214 22:01:02.419102  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.419114  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:02.419122  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:02.419199  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:02.464600  296043 cri.go:89] found id: ""
	I0214 22:01:02.464636  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.464655  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:02.464670  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:02.464690  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:02.480854  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:02.480890  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:02.572148  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:02.572175  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:02.572191  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:02.686587  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:02.686646  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:02.734413  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:02.734443  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.297012  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:05.310239  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:05.310303  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:05.344855  296043 cri.go:89] found id: ""
	I0214 22:01:05.344884  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.344895  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:05.344905  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:05.344962  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:05.390466  296043 cri.go:89] found id: ""
	I0214 22:01:05.390498  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.390510  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:05.390518  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:05.390575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:05.442562  296043 cri.go:89] found id: ""
	I0214 22:01:05.442598  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.442611  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:05.442619  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:05.442707  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:05.482534  296043 cri.go:89] found id: ""
	I0214 22:01:05.482562  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.482577  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:05.482583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:05.482659  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:05.526775  296043 cri.go:89] found id: ""
	I0214 22:01:05.526802  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.526813  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:05.526821  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:05.526887  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:05.566945  296043 cri.go:89] found id: ""
	I0214 22:01:05.566971  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.566979  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:05.566991  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:05.567050  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:05.610803  296043 cri.go:89] found id: ""
	I0214 22:01:05.610836  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.610849  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:05.610857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:05.610934  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:05.658446  296043 cri.go:89] found id: ""
	I0214 22:01:05.658475  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.658485  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:05.658497  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:05.658512  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:05.731902  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:05.731929  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:05.731942  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:05.842065  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:05.842098  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:05.903308  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:05.903343  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.975417  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:05.975516  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:08.494769  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.514374  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:08.514458  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:08.561822  296043 cri.go:89] found id: ""
	I0214 22:01:08.561850  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.561859  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:08.561865  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:08.561912  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:08.602005  296043 cri.go:89] found id: ""
	I0214 22:01:08.602038  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.602051  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:08.602059  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:08.602136  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:08.642584  296043 cri.go:89] found id: ""
	I0214 22:01:08.642612  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.642636  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:08.642647  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:08.642725  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:08.677455  296043 cri.go:89] found id: ""
	I0214 22:01:08.677490  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.677506  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:08.677514  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:08.677579  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:08.723982  296043 cri.go:89] found id: ""
	I0214 22:01:08.724032  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.724046  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:08.724056  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:08.724129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:08.775467  296043 cri.go:89] found id: ""
	I0214 22:01:08.775503  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.775516  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:08.775525  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:08.775587  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:08.820143  296043 cri.go:89] found id: ""
	I0214 22:01:08.820187  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.820209  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:08.820218  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:08.820289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:08.855406  296043 cri.go:89] found id: ""
	I0214 22:01:08.855437  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.855448  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:08.855460  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:08.855476  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:08.914025  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:08.914052  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:08.927679  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:08.927708  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:09.029673  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:09.029699  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:09.029717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:09.113311  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:09.113358  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:11.659812  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:11.673901  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:11.673974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:11.710824  296043 cri.go:89] found id: ""
	I0214 22:01:11.710856  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.710868  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:11.710877  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:11.710939  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:11.749955  296043 cri.go:89] found id: ""
	I0214 22:01:11.749996  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.750009  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:11.750034  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:11.750109  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:11.784268  296043 cri.go:89] found id: ""
	I0214 22:01:11.784296  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.784308  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:11.784317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:11.784381  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:11.822362  296043 cri.go:89] found id: ""
	I0214 22:01:11.822387  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.822395  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:11.822401  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:11.822462  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:11.860753  296043 cri.go:89] found id: ""
	I0214 22:01:11.860778  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.860786  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:11.860791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:11.860833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:11.901670  296043 cri.go:89] found id: ""
	I0214 22:01:11.901697  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.901709  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:11.901717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:11.901779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:11.939194  296043 cri.go:89] found id: ""
	I0214 22:01:11.939220  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.939230  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:11.939236  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:11.939289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:11.973819  296043 cri.go:89] found id: ""
	I0214 22:01:11.973846  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.973857  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:11.973869  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:11.973882  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:12.052290  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:12.052321  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:12.099732  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:12.099775  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:12.163962  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:12.163994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:12.181579  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:12.181625  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:12.272639  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:14.774322  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:14.787244  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:14.787299  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:14.820977  296043 cri.go:89] found id: ""
	I0214 22:01:14.821011  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.821024  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:14.821034  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:14.821099  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:14.852858  296043 cri.go:89] found id: ""
	I0214 22:01:14.852879  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.852888  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:14.852893  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:14.852947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:14.896441  296043 cri.go:89] found id: ""
	I0214 22:01:14.896464  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.896475  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:14.896483  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:14.896535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:14.930673  296043 cri.go:89] found id: ""
	I0214 22:01:14.930700  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.930712  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:14.930719  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:14.930776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:14.972676  296043 cri.go:89] found id: ""
	I0214 22:01:14.972708  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.972721  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:14.972729  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:14.972797  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:15.009271  296043 cri.go:89] found id: ""
	I0214 22:01:15.009303  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.009314  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:15.009323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:15.009406  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:15.045975  296043 cri.go:89] found id: ""
	I0214 22:01:15.046007  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.046021  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:15.046029  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:15.046102  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:15.084924  296043 cri.go:89] found id: ""
	I0214 22:01:15.084956  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.084967  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:15.084980  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:15.084995  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:15.143553  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:15.143587  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:15.158649  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:15.158687  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:15.235319  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:15.235343  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:15.235363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:15.324951  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:15.324990  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:17.869522  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:17.886022  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:17.886114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:17.926259  296043 cri.go:89] found id: ""
	I0214 22:01:17.926287  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.926296  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:17.926302  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:17.926358  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:17.989648  296043 cri.go:89] found id: ""
	I0214 22:01:17.989675  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.989683  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:17.989689  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:17.989744  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:18.041262  296043 cri.go:89] found id: ""
	I0214 22:01:18.041295  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.041307  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:18.041315  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:18.041380  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:18.080028  296043 cri.go:89] found id: ""
	I0214 22:01:18.080059  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.080069  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:18.080075  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:18.080134  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:18.116135  296043 cri.go:89] found id: ""
	I0214 22:01:18.116163  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.116172  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:18.116179  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:18.116239  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:18.148268  296043 cri.go:89] found id: ""
	I0214 22:01:18.148302  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.148315  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:18.148323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:18.148399  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:18.180352  296043 cri.go:89] found id: ""
	I0214 22:01:18.180378  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.180388  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:18.180394  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:18.180438  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:18.211513  296043 cri.go:89] found id: ""
	I0214 22:01:18.211534  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.211541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:18.211551  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:18.211562  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:18.260797  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:18.260831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:18.273477  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:18.273503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:18.340163  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:18.340182  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:18.340193  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:18.413927  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:18.413950  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:20.952238  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:20.964925  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:20.964984  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:21.000265  296043 cri.go:89] found id: ""
	I0214 22:01:21.000295  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.000306  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:21.000314  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:21.000376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:21.042754  296043 cri.go:89] found id: ""
	I0214 22:01:21.042780  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.042790  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:21.042798  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:21.042862  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:21.078636  296043 cri.go:89] found id: ""
	I0214 22:01:21.078664  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.078676  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:21.078684  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:21.078747  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:21.112023  296043 cri.go:89] found id: ""
	I0214 22:01:21.112050  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.112058  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:21.112067  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:21.112129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:21.147419  296043 cri.go:89] found id: ""
	I0214 22:01:21.147451  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.147462  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:21.147470  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:21.147541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:21.180151  296043 cri.go:89] found id: ""
	I0214 22:01:21.180191  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.180201  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:21.180209  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:21.180271  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:21.215007  296043 cri.go:89] found id: ""
	I0214 22:01:21.215037  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.215049  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:21.215057  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:21.215122  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:21.247912  296043 cri.go:89] found id: ""
	I0214 22:01:21.247953  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.247964  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:21.247976  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:21.247992  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:21.300392  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:21.300429  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:21.313583  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:21.313604  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:21.381863  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:21.381888  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:21.381902  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:21.460562  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:21.460591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:24.002770  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:24.015631  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:24.015700  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:24.051601  296043 cri.go:89] found id: ""
	I0214 22:01:24.051637  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.051649  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:24.051657  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:24.051710  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:24.084938  296043 cri.go:89] found id: ""
	I0214 22:01:24.084963  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.084971  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:24.084977  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:24.085019  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:24.118982  296043 cri.go:89] found id: ""
	I0214 22:01:24.119012  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.119023  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:24.119030  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:24.119091  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:24.150809  296043 cri.go:89] found id: ""
	I0214 22:01:24.150838  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.150849  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:24.150857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:24.150927  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:24.180499  296043 cri.go:89] found id: ""
	I0214 22:01:24.180527  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.180538  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:24.180546  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:24.180613  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:24.214503  296043 cri.go:89] found id: ""
	I0214 22:01:24.214531  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.214542  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:24.214550  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:24.214616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:24.250992  296043 cri.go:89] found id: ""
	I0214 22:01:24.251018  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.251026  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:24.251032  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:24.251090  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:24.287791  296043 cri.go:89] found id: ""
	I0214 22:01:24.287816  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.287824  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:24.287839  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:24.287854  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:24.324499  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:24.324533  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:24.373673  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:24.373700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:24.387527  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:24.387558  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:24.464362  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:24.464394  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:24.464409  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:27.040249  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:27.052990  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:27.053055  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:27.092109  296043 cri.go:89] found id: ""
	I0214 22:01:27.092138  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.092150  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:27.092158  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:27.092219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:27.128290  296043 cri.go:89] found id: ""
	I0214 22:01:27.128323  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.128336  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:27.128344  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:27.128413  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:27.166086  296043 cri.go:89] found id: ""
	I0214 22:01:27.166113  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.166121  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:27.166127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:27.166174  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:27.198082  296043 cri.go:89] found id: ""
	I0214 22:01:27.198114  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.198126  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:27.198133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:27.198196  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:27.229133  296043 cri.go:89] found id: ""
	I0214 22:01:27.229167  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.229182  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:27.229190  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:27.229253  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:27.267454  296043 cri.go:89] found id: ""
	I0214 22:01:27.267483  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.267495  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:27.267504  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:27.267570  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:27.306235  296043 cri.go:89] found id: ""
	I0214 22:01:27.306265  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.306277  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:27.306289  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:27.306368  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:27.337862  296043 cri.go:89] found id: ""
	I0214 22:01:27.337894  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.337905  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:27.337916  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:27.337928  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:27.384978  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:27.385007  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:27.398968  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:27.398999  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:27.468335  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:27.468363  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:27.468379  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:27.549329  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:27.549363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:30.097135  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:30.110653  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:30.110740  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:30.148484  296043 cri.go:89] found id: ""
	I0214 22:01:30.148518  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.148530  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:30.148538  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:30.148611  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:30.183761  296043 cri.go:89] found id: ""
	I0214 22:01:30.183791  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.183802  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:30.183809  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:30.183866  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:30.216232  296043 cri.go:89] found id: ""
	I0214 22:01:30.216260  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.216271  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:30.216278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:30.216346  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:30.248173  296043 cri.go:89] found id: ""
	I0214 22:01:30.248199  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.248210  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:30.248217  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:30.248281  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:30.283288  296043 cri.go:89] found id: ""
	I0214 22:01:30.283318  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.283329  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:30.283350  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:30.283402  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:30.324270  296043 cri.go:89] found id: ""
	I0214 22:01:30.324297  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.324308  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:30.324317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:30.324373  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:30.360122  296043 cri.go:89] found id: ""
	I0214 22:01:30.360146  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.360154  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:30.360159  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:30.360207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:30.394546  296043 cri.go:89] found id: ""
	I0214 22:01:30.394571  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.394580  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:30.394594  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:30.394613  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:30.449231  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:30.449258  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:30.463475  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:30.463499  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:30.536719  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:30.536746  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:30.536762  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:30.619446  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:30.619484  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:33.159018  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:33.176759  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:33.176842  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:33.216502  296043 cri.go:89] found id: ""
	I0214 22:01:33.216527  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.216536  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:33.216542  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:33.216597  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:33.254772  296043 cri.go:89] found id: ""
	I0214 22:01:33.254799  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.254810  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:33.254817  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:33.254878  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:33.287687  296043 cri.go:89] found id: ""
	I0214 22:01:33.287713  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.287722  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:33.287728  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:33.287790  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:33.319969  296043 cri.go:89] found id: ""
	I0214 22:01:33.319990  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.319997  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:33.320002  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:33.320046  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:33.352720  296043 cri.go:89] found id: ""
	I0214 22:01:33.352740  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.352747  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:33.352752  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:33.352807  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:33.390638  296043 cri.go:89] found id: ""
	I0214 22:01:33.390662  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.390671  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:33.390678  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:33.390730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:33.425935  296043 cri.go:89] found id: ""
	I0214 22:01:33.425954  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.425962  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:33.425967  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:33.426012  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:33.459671  296043 cri.go:89] found id: ""
	I0214 22:01:33.459695  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.459705  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:33.459716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:33.459730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:33.535469  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:33.535493  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:33.570473  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:33.570501  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:33.619720  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:33.619745  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:33.631829  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:33.631850  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:33.701637  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.202577  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:36.216700  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:36.216761  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:36.250764  296043 cri.go:89] found id: ""
	I0214 22:01:36.250789  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.250798  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:36.250804  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:36.250853  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:36.284811  296043 cri.go:89] found id: ""
	I0214 22:01:36.284838  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.284850  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:36.284857  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:36.284916  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:36.321197  296043 cri.go:89] found id: ""
	I0214 22:01:36.321219  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.321227  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:36.321235  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:36.321277  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:36.354869  296043 cri.go:89] found id: ""
	I0214 22:01:36.354896  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.354907  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:36.354915  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:36.354967  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:36.393688  296043 cri.go:89] found id: ""
	I0214 22:01:36.393712  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.393722  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:36.393730  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:36.393781  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:36.427985  296043 cri.go:89] found id: ""
	I0214 22:01:36.428006  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.428015  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:36.428023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:36.428076  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:36.458367  296043 cri.go:89] found id: ""
	I0214 22:01:36.458386  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.458393  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:36.458398  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:36.458446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:36.489038  296043 cri.go:89] found id: ""
	I0214 22:01:36.489061  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.489069  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:36.489080  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:36.489093  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:36.526950  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:36.526971  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:36.577258  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:36.577293  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:36.589545  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:36.589567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:36.658634  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.658656  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:36.658674  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:39.231339  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:39.244717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:39.244765  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:39.277734  296043 cri.go:89] found id: ""
	I0214 22:01:39.277756  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.277766  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:39.277773  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:39.277836  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:39.309896  296043 cri.go:89] found id: ""
	I0214 22:01:39.309916  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.309923  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:39.309931  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:39.309979  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:39.342579  296043 cri.go:89] found id: ""
	I0214 22:01:39.342608  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.342619  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:39.342637  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:39.342686  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:39.378083  296043 cri.go:89] found id: ""
	I0214 22:01:39.378112  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.378124  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:39.378134  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:39.378192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:39.414803  296043 cri.go:89] found id: ""
	I0214 22:01:39.414828  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.414842  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:39.414850  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:39.414904  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:39.449659  296043 cri.go:89] found id: ""
	I0214 22:01:39.449690  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.449702  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:39.449711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:39.449778  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:39.486261  296043 cri.go:89] found id: ""
	I0214 22:01:39.486288  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.486300  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:39.486308  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:39.486371  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:39.518224  296043 cri.go:89] found id: ""
	I0214 22:01:39.518245  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.518253  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:39.518264  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:39.518277  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:39.598112  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:39.598145  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:39.634704  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:39.634727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:39.685193  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:39.685217  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:39.697332  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:39.697355  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:39.773514  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.273720  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:42.290415  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:42.290491  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:42.329509  296043 cri.go:89] found id: ""
	I0214 22:01:42.329539  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.329549  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:42.329556  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:42.329616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:42.366218  296043 cri.go:89] found id: ""
	I0214 22:01:42.366247  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.366259  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:42.366267  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:42.366324  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:42.404603  296043 cri.go:89] found id: ""
	I0214 22:01:42.404627  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.404634  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:42.404641  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:42.404691  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:42.437980  296043 cri.go:89] found id: ""
	I0214 22:01:42.438008  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.438017  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:42.438023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:42.438072  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:42.470475  296043 cri.go:89] found id: ""
	I0214 22:01:42.470505  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.470517  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:42.470526  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:42.470592  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:42.503557  296043 cri.go:89] found id: ""
	I0214 22:01:42.503593  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.503606  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:42.503614  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:42.503681  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:42.537499  296043 cri.go:89] found id: ""
	I0214 22:01:42.537549  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.537559  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:42.537568  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:42.537629  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:42.581710  296043 cri.go:89] found id: ""
	I0214 22:01:42.581740  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.581752  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:42.581765  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:42.581785  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:42.594891  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:42.594920  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:42.675186  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.675207  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:42.675221  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:42.762000  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:42.762033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:42.813591  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:42.813644  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.368276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:45.383477  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:45.383541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:45.419199  296043 cri.go:89] found id: ""
	I0214 22:01:45.419226  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.419235  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:45.419242  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:45.419286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:45.457708  296043 cri.go:89] found id: ""
	I0214 22:01:45.457740  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.457752  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:45.457761  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:45.457831  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:45.497110  296043 cri.go:89] found id: ""
	I0214 22:01:45.497138  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.497146  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:45.497154  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:45.497220  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:45.534294  296043 cri.go:89] found id: ""
	I0214 22:01:45.534318  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.534326  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:45.534333  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:45.534392  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:45.575462  296043 cri.go:89] found id: ""
	I0214 22:01:45.575492  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.575504  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:45.575513  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:45.575573  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:45.615590  296043 cri.go:89] found id: ""
	I0214 22:01:45.615620  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.615631  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:45.615639  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:45.615694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:45.655779  296043 cri.go:89] found id: ""
	I0214 22:01:45.655813  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.655826  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:45.655834  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:45.655903  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:45.691350  296043 cri.go:89] found id: ""
	I0214 22:01:45.691376  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.691386  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:45.691395  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:45.691407  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.749784  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:45.749833  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:45.764193  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:45.764225  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:45.836887  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:45.836914  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:45.836930  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:45.943944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:45.943974  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.486718  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:48.500667  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:48.500730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:48.539749  296043 cri.go:89] found id: ""
	I0214 22:01:48.539775  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.539785  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:48.539794  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:48.539846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:48.576675  296043 cri.go:89] found id: ""
	I0214 22:01:48.576703  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.576714  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:48.576723  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:48.576776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:48.608593  296043 cri.go:89] found id: ""
	I0214 22:01:48.608618  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.608627  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:48.608634  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:48.608684  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:48.644181  296043 cri.go:89] found id: ""
	I0214 22:01:48.644210  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.644221  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:48.644228  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:48.644280  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:48.681188  296043 cri.go:89] found id: ""
	I0214 22:01:48.681214  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.681224  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:48.681232  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:48.681286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:48.719817  296043 cri.go:89] found id: ""
	I0214 22:01:48.719847  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.719857  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:48.719865  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:48.719922  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:48.756080  296043 cri.go:89] found id: ""
	I0214 22:01:48.756107  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.756119  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:48.756127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:48.756188  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:48.796664  296043 cri.go:89] found id: ""
	I0214 22:01:48.796692  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.796703  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:48.796716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:48.796730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:48.877633  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:48.877660  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.924693  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:48.924726  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:48.980014  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:48.980045  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:48.993129  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:48.993153  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:49.067409  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:51.568106  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:51.583193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:51.583254  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:51.620026  296043 cri.go:89] found id: ""
	I0214 22:01:51.620050  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.620058  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:51.620063  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:51.620120  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:51.654068  296043 cri.go:89] found id: ""
	I0214 22:01:51.654103  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.654114  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:51.654122  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:51.654176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:51.689022  296043 cri.go:89] found id: ""
	I0214 22:01:51.689047  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.689055  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:51.689062  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:51.689118  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:51.725479  296043 cri.go:89] found id: ""
	I0214 22:01:51.725503  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.725513  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:51.725524  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:51.725576  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:51.761617  296043 cri.go:89] found id: ""
	I0214 22:01:51.761644  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.761653  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:51.761660  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:51.761719  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:51.802942  296043 cri.go:89] found id: ""
	I0214 22:01:51.802963  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.802972  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:51.802979  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:51.803027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:51.843214  296043 cri.go:89] found id: ""
	I0214 22:01:51.843242  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.843252  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:51.843264  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:51.843316  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:51.910513  296043 cri.go:89] found id: ""
	I0214 22:01:51.910550  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.910562  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:51.910576  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:51.910594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:51.923639  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:51.923676  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:52.014337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:52.014366  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:52.014384  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:52.106586  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:52.106617  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:52.154349  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:52.154376  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:54.715843  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:54.729644  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:54.729694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:54.766181  296043 cri.go:89] found id: ""
	I0214 22:01:54.766200  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.766210  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:54.766216  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:54.766276  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:54.808010  296043 cri.go:89] found id: ""
	I0214 22:01:54.808039  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.808050  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:54.808064  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:54.808130  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:54.856672  296043 cri.go:89] found id: ""
	I0214 22:01:54.856693  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.856711  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:54.856717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:54.856762  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:54.906801  296043 cri.go:89] found id: ""
	I0214 22:01:54.906820  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.906827  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:54.906833  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:54.906873  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:54.951444  296043 cri.go:89] found id: ""
	I0214 22:01:54.951467  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.951477  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:54.951485  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:54.951539  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:54.993431  296043 cri.go:89] found id: ""
	I0214 22:01:54.993457  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.993468  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:54.993476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:54.993520  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:55.040664  296043 cri.go:89] found id: ""
	I0214 22:01:55.040714  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.040726  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:55.040735  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:55.040793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:55.080280  296043 cri.go:89] found id: ""
	I0214 22:01:55.080309  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.080317  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:55.080327  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:55.080342  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:55.141974  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:55.142012  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:55.159407  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:55.159436  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:55.238973  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:55.238998  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:55.239010  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:55.326876  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:55.326907  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:57.883816  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:57.898210  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:57.898270  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:57.933120  296043 cri.go:89] found id: ""
	I0214 22:01:57.933146  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.933155  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:57.933163  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:57.933219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:57.968047  296043 cri.go:89] found id: ""
	I0214 22:01:57.968072  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.968089  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:57.968096  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:57.968150  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:58.007167  296043 cri.go:89] found id: ""
	I0214 22:01:58.007194  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.007205  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:58.007213  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:58.007263  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:58.044221  296043 cri.go:89] found id: ""
	I0214 22:01:58.044249  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.044259  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:58.044270  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:58.044322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:58.079197  296043 cri.go:89] found id: ""
	I0214 22:01:58.079226  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.079237  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:58.079246  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:58.079308  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:58.115726  296043 cri.go:89] found id: ""
	I0214 22:01:58.115757  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.115768  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:58.115779  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:58.115833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:58.151192  296043 cri.go:89] found id: ""
	I0214 22:01:58.151218  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.151226  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:58.151231  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:58.151279  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:58.186512  296043 cri.go:89] found id: ""
	I0214 22:01:58.186531  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.186539  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:58.186548  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:58.186559  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:58.225500  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:58.225528  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:58.273842  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:58.273869  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:58.297373  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:58.297401  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:58.403111  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:58.403131  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:58.403155  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:00.996658  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:01.013323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:01.013388  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:01.054606  296043 cri.go:89] found id: ""
	I0214 22:02:01.054647  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.054659  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:01.054667  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:01.054729  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:01.091830  296043 cri.go:89] found id: ""
	I0214 22:02:01.091860  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.091870  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:01.091878  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:01.091933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:01.127100  296043 cri.go:89] found id: ""
	I0214 22:02:01.127126  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.127133  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:01.127139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:01.127176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:01.160268  296043 cri.go:89] found id: ""
	I0214 22:02:01.160291  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.160298  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:01.160304  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:01.160354  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:01.192244  296043 cri.go:89] found id: ""
	I0214 22:02:01.192277  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.192290  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:01.192301  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:01.192372  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:01.226746  296043 cri.go:89] found id: ""
	I0214 22:02:01.226777  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.226787  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:01.226797  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:01.226848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:01.264235  296043 cri.go:89] found id: ""
	I0214 22:02:01.264257  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.264266  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:01.264274  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:01.264325  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:01.299082  296043 cri.go:89] found id: ""
	I0214 22:02:01.299107  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.299119  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:01.299137  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:01.299152  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:01.374067  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:01.374087  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:01.374100  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:01.466814  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:01.466842  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:01.508566  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:01.508591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:01.565286  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:01.565318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.079276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:04.098100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:04.098168  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:04.148307  296043 cri.go:89] found id: ""
	I0214 22:02:04.148338  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.148347  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:04.148353  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:04.148401  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:04.182456  296043 cri.go:89] found id: ""
	I0214 22:02:04.182483  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.182493  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:04.182500  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:04.182548  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:04.222072  296043 cri.go:89] found id: ""
	I0214 22:02:04.222099  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.222107  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:04.222112  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:04.222155  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:04.255053  296043 cri.go:89] found id: ""
	I0214 22:02:04.255082  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.255092  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:04.255100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:04.255154  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:04.293951  296043 cri.go:89] found id: ""
	I0214 22:02:04.293982  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.293991  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:04.293998  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:04.294051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:04.334092  296043 cri.go:89] found id: ""
	I0214 22:02:04.334115  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.334123  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:04.334130  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:04.334179  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:04.366129  296043 cri.go:89] found id: ""
	I0214 22:02:04.366148  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.366160  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:04.366166  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:04.366207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:04.398508  296043 cri.go:89] found id: ""
	I0214 22:02:04.398532  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.398541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:04.398554  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:04.398567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:04.446518  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:04.446547  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.459347  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:04.459368  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:04.535181  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:04.535198  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:04.535212  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:04.608858  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:04.608891  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:07.150996  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:07.164414  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:07.164466  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:07.198549  296043 cri.go:89] found id: ""
	I0214 22:02:07.198571  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.198579  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:07.198585  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:07.198644  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:07.231429  296043 cri.go:89] found id: ""
	I0214 22:02:07.231454  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.231465  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:07.231472  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:07.231527  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:07.262244  296043 cri.go:89] found id: ""
	I0214 22:02:07.262266  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.262273  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:07.262278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:07.262322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:07.292654  296043 cri.go:89] found id: ""
	I0214 22:02:07.292670  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.292677  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:07.292686  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:07.292731  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:07.325893  296043 cri.go:89] found id: ""
	I0214 22:02:07.325911  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.325918  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:07.325923  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:07.325961  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:07.358776  296043 cri.go:89] found id: ""
	I0214 22:02:07.358799  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.358806  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:07.358811  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:07.358855  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:07.392029  296043 cri.go:89] found id: ""
	I0214 22:02:07.392052  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.392062  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:07.392073  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:07.392132  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:07.423080  296043 cri.go:89] found id: ""
	I0214 22:02:07.423105  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.423115  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:07.423128  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:07.423142  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:07.473625  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:07.473649  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:07.486487  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:07.486510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:07.550364  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:07.550387  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:07.550400  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:07.620727  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:07.620750  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:10.158575  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:10.171139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:10.171189  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:10.203796  296043 cri.go:89] found id: ""
	I0214 22:02:10.203825  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.203837  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:10.203847  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:10.203905  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:10.235261  296043 cri.go:89] found id: ""
	I0214 22:02:10.235279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.235287  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:10.235292  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:10.235331  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:10.267017  296043 cri.go:89] found id: ""
	I0214 22:02:10.267037  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.267044  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:10.267052  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:10.267110  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:10.298100  296043 cri.go:89] found id: ""
	I0214 22:02:10.298121  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.298127  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:10.298133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:10.298173  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:10.330163  296043 cri.go:89] found id: ""
	I0214 22:02:10.330189  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.330196  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:10.330205  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:10.330257  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:10.363253  296043 cri.go:89] found id: ""
	I0214 22:02:10.363279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.363287  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:10.363293  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:10.363345  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:10.393052  296043 cri.go:89] found id: ""
	I0214 22:02:10.393073  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.393081  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:10.393086  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:10.393124  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:10.423261  296043 cri.go:89] found id: ""
	I0214 22:02:10.423284  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.423292  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:10.423302  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:10.423314  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:10.474817  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:10.474839  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:10.487089  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:10.487117  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:10.552798  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:10.552818  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:10.552827  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:10.633678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:10.633700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:13.175779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:13.188862  296043 kubeadm.go:593] duration metric: took 4m4.534890262s to restartPrimaryControlPlane
	W0214 22:02:13.188929  296043 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0214 22:02:13.188953  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:02:14.903694  296043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.714713868s)
	I0214 22:02:14.903774  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:02:14.917520  296043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:02:14.927114  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:02:14.936531  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:02:14.936548  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:02:14.936593  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:02:14.945506  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:02:14.945543  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:02:14.954573  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:02:14.963268  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:02:14.963308  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:02:14.972385  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.981144  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:02:14.981190  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.990181  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:02:14.998739  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:02:14.998781  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:02:15.007880  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:02:15.079968  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:02:15.080063  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:02:15.227132  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:02:15.227264  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:02:15.227363  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:02:15.399613  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:02:15.401413  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:02:15.401514  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:02:15.401584  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:02:15.401699  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:02:15.401787  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:02:15.401887  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:02:15.403287  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:02:15.403395  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:02:15.403485  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:02:15.403584  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:02:15.403691  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:02:15.403760  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:02:15.403854  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:02:15.575946  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:02:15.646531  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:02:16.039563  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:02:16.210385  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:02:16.225322  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:02:16.226388  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:02:16.226445  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:02:16.354308  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:02:16.356102  296043 out.go:235]   - Booting up control plane ...
	I0214 22:02:16.356211  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:02:16.360283  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:02:16.361731  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:02:16.362515  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:02:16.373807  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:02:56.375481  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:02:56.376996  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:02:56.377215  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:01.377539  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:01.377722  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:11.378071  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:11.378255  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:31.379013  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:31.379253  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.380898  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:11.381134  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.381161  296043 kubeadm.go:310] 
	I0214 22:04:11.381223  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:04:11.381276  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:04:11.381287  296043 kubeadm.go:310] 
	I0214 22:04:11.381330  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:04:11.381386  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:04:11.381508  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:04:11.381517  296043 kubeadm.go:310] 
	I0214 22:04:11.381610  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:04:11.381661  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:04:11.381706  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:04:11.381713  296043 kubeadm.go:310] 
	I0214 22:04:11.381844  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:04:11.381962  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:04:11.381985  296043 kubeadm.go:310] 
	I0214 22:04:11.382159  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:04:11.382269  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:04:11.382378  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:04:11.382478  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:04:11.382488  296043 kubeadm.go:310] 
	I0214 22:04:11.383608  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:04:11.383712  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:04:11.383805  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0214 22:04:11.383962  296043 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 22:04:11.384029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:04:11.847932  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:04:11.862250  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:04:11.872076  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:04:11.872096  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:04:11.872141  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:04:11.881248  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:04:11.881299  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:04:11.890591  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:04:11.899561  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:04:11.899609  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:04:11.908818  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.917642  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:04:11.917688  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.926938  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:04:11.936007  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:04:11.936053  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:04:11.945314  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:04:12.015411  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:04:12.015466  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:04:12.151668  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:04:12.151844  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:04:12.151988  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:04:12.322327  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:04:12.324344  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:04:12.324451  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:04:12.324530  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:04:12.324659  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:04:12.324761  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:04:12.324855  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:04:12.324934  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:04:12.325109  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:04:12.325566  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:04:12.325866  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:04:12.326334  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:04:12.326391  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:04:12.326453  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:04:12.468450  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:04:12.741068  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:04:12.905628  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:04:13.075487  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:04:13.093105  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:04:13.093840  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:04:13.093897  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:04:13.225868  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:04:13.227602  296043 out.go:235]   - Booting up control plane ...
	I0214 22:04:13.227715  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:04:13.235626  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:04:13.238592  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:04:13.239495  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:04:13.246539  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:04:53.249274  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:04:53.249602  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:53.249764  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:58.250244  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:58.250486  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:08.251032  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:08.251247  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:28.253223  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:28.253527  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252450  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:06:08.252752  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252786  296043 kubeadm.go:310] 
	I0214 22:06:08.252841  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:06:08.252891  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:06:08.252909  296043 kubeadm.go:310] 
	I0214 22:06:08.252957  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:06:08.253010  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:06:08.253150  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:06:08.253160  296043 kubeadm.go:310] 
	I0214 22:06:08.253287  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:06:08.253332  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:06:08.253372  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:06:08.253403  296043 kubeadm.go:310] 
	I0214 22:06:08.253569  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:06:08.253692  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:06:08.253701  296043 kubeadm.go:310] 
	I0214 22:06:08.253861  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:06:08.253990  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:06:08.254095  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:06:08.254195  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:06:08.254206  296043 kubeadm.go:310] 
	I0214 22:06:08.254491  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:06:08.254637  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:06:08.254748  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 22:06:08.254848  296043 kubeadm.go:394] duration metric: took 7m59.662371118s to StartCluster
	I0214 22:06:08.254965  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:06:08.255027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:06:08.298673  296043 cri.go:89] found id: ""
	I0214 22:06:08.298694  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.298702  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:06:08.298709  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:06:08.298777  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:06:08.329697  296043 cri.go:89] found id: ""
	I0214 22:06:08.329717  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.329724  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:06:08.329729  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:06:08.329779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:06:08.360276  296043 cri.go:89] found id: ""
	I0214 22:06:08.360296  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.360304  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:06:08.360310  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:06:08.360370  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:06:08.391153  296043 cri.go:89] found id: ""
	I0214 22:06:08.391180  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.391188  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:06:08.391193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:06:08.391244  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:06:08.421880  296043 cri.go:89] found id: ""
	I0214 22:06:08.421907  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.421917  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:06:08.421924  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:06:08.421974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:06:08.453558  296043 cri.go:89] found id: ""
	I0214 22:06:08.453578  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.453587  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:06:08.453594  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:06:08.453641  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:06:08.495718  296043 cri.go:89] found id: ""
	I0214 22:06:08.495750  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.495761  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:06:08.495772  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:06:08.495845  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:06:08.542115  296043 cri.go:89] found id: ""
	I0214 22:06:08.542141  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.542152  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:06:08.542165  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:06:08.542180  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:06:08.605825  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:06:08.605851  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:06:08.621228  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:06:08.621251  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:06:08.696999  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:06:08.697025  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:06:08.697050  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:06:08.796690  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:06:08.796716  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:06:08.834010  296043 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 22:06:08.834068  296043 out.go:270] * 
	* 
	W0214 22:06:08.834153  296043 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.834166  296043 out.go:270] * 
	* 
	W0214 22:06:08.835011  296043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 22:06:08.838512  296043 out.go:201] 
	W0214 22:06:08.839577  296043 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.839631  296043 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 22:06:08.839655  296043 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 22:06:08.840885  296043 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-201745 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (268.982507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-201745 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-266997 sudo iptables                       | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo docker                         | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo find                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo crio                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-266997                                     | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 22:00:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 22:00:40.013497  304371 out.go:345] Setting OutFile to fd 1 ...
	I0214 22:00:40.013688  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013723  304371 out.go:358] Setting ErrFile to fd 2...
	I0214 22:00:40.013740  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013941  304371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 22:00:40.014539  304371 out.go:352] Setting JSON to false
	I0214 22:00:40.015878  304371 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9784,"bootTime":1739560656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 22:00:40.015969  304371 start.go:140] virtualization: kvm guest
	I0214 22:00:40.017995  304371 out.go:177] * [bridge-266997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 22:00:40.019548  304371 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 22:00:40.019559  304371 notify.go:220] Checking for updates...
	I0214 22:00:40.021770  304371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 22:00:40.022963  304371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:00:40.024165  304371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.025322  304371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 22:00:40.026557  304371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 22:00:40.028422  304371 config.go:182] Loaded profile config "enable-default-cni-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028571  304371 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028707  304371 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 22:00:40.028816  304371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 22:00:40.075364  304371 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 22:00:40.076500  304371 start.go:304] selected driver: kvm2
	I0214 22:00:40.076529  304371 start.go:908] validating driver "kvm2" against <nil>
	I0214 22:00:40.076547  304371 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 22:00:40.077631  304371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.077721  304371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 22:00:40.097536  304371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 22:00:40.097586  304371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 22:00:40.097859  304371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:00:40.097901  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:00:40.097911  304371 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 22:00:40.097991  304371 start.go:347] cluster config:
	{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:40.098147  304371 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.099655  304371 out.go:177] * Starting "bridge-266997" primary control-plane node in "bridge-266997" cluster
	I0214 22:00:40.100707  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:40.100759  304371 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 22:00:40.100773  304371 cache.go:56] Caching tarball of preloaded images
	I0214 22:00:40.100872  304371 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 22:00:40.100888  304371 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 22:00:40.100998  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:00:40.101023  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json: {Name:mk956d7ec0a679c86c01d5e19aaca4ffe835db04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:40.101195  304371 start.go:360] acquireMachinesLock for bridge-266997: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 22:00:40.739410  304371 start.go:364] duration metric: took 638.071669ms to acquireMachinesLock for "bridge-266997"
	I0214 22:00:40.739470  304371 start.go:93] Provisioning new machine with config: &{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:00:40.739597  304371 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 22:00:38.638103  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638775  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has current primary IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638815  302662 main.go:141] libmachine: (flannel-266997) found domain IP: 192.168.61.227
	I0214 22:00:38.638837  302662 main.go:141] libmachine: (flannel-266997) reserving static IP address...
	I0214 22:00:38.639227  302662 main.go:141] libmachine: (flannel-266997) DBG | unable to find host DHCP lease matching {name: "flannel-266997", mac: "52:54:00:ee:24:91", ip: "192.168.61.227"} in network mk-flannel-266997
	I0214 22:00:38.720741  302662 main.go:141] libmachine: (flannel-266997) reserved static IP address 192.168.61.227 for domain flannel-266997
	I0214 22:00:38.720767  302662 main.go:141] libmachine: (flannel-266997) DBG | Getting to WaitForSSH function...
	I0214 22:00:38.720774  302662 main.go:141] libmachine: (flannel-266997) waiting for SSH...
	I0214 22:00:38.723657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724193  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.724222  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724376  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH client type: external
	I0214 22:00:38.724398  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa (-rw-------)
	I0214 22:00:38.724424  302662 main.go:141] libmachine: (flannel-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:00:38.724432  302662 main.go:141] libmachine: (flannel-266997) DBG | About to run SSH command:
	I0214 22:00:38.724443  302662 main.go:141] libmachine: (flannel-266997) DBG | exit 0
	I0214 22:00:38.855089  302662 main.go:141] libmachine: (flannel-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:00:38.855431  302662 main.go:141] libmachine: (flannel-266997) KVM machine creation complete
	I0214 22:00:38.855717  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:38.856304  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856540  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856736  302662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:00:38.856755  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:00:38.858099  302662 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:00:38.858126  302662 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:00:38.858133  302662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:00:38.858141  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.860473  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860742  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.860769  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860866  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.861047  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861239  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861397  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.861554  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.861789  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.861802  302662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:00:38.987056  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:38.987080  302662 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:00:38.987090  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.991287  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.991867  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.991901  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.992117  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.992347  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992546  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992737  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.992969  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.993199  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.993218  302662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:00:39.120019  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:00:39.120118  302662 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:00:39.120133  302662 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:00:39.120144  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120439  302662 buildroot.go:166] provisioning hostname "flannel-266997"
	I0214 22:00:39.120468  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120637  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.123699  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279544  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.279574  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279895  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.280156  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280385  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.280752  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.280990  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.281008  302662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-266997 && echo "flannel-266997" | sudo tee /etc/hostname
	I0214 22:00:39.418566  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-266997
	
	I0214 22:00:39.418600  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.696405  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.696786  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.696816  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.697106  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.697346  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697519  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697673  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.697837  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.698062  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.698079  302662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:00:39.838034  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:39.838073  302662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:00:39.838101  302662 buildroot.go:174] setting up certificates
	I0214 22:00:39.838118  302662 provision.go:84] configureAuth start
	I0214 22:00:39.838134  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.838437  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:39.841947  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842398  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.842423  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842549  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.845575  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846164  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.846413  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846385  302662 provision.go:143] copyHostCerts
	I0214 22:00:39.846558  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:00:39.846578  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:00:39.846685  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:00:39.846828  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:00:39.846841  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:00:39.846885  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:00:39.846995  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:00:39.847008  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:00:39.847066  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:00:39.847177  302662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.flannel-266997 san=[127.0.0.1 192.168.61.227 flannel-266997 localhost minikube]
	I0214 22:00:40.050848  302662 provision.go:177] copyRemoteCerts
	I0214 22:00:40.050928  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:00:40.050984  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.054657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055071  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.055100  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055790  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.056179  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.056663  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.056830  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.157340  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:00:40.184601  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0214 22:00:40.210273  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 22:00:40.235456  302662 provision.go:87] duration metric: took 397.323852ms to configureAuth
	I0214 22:00:40.235484  302662 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:00:40.235682  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.235775  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.238280  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238712  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.238751  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238935  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.239137  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239310  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239478  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.239662  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.239824  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.239838  302662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:00:40.477460  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:00:40.477495  302662 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:00:40.477529  302662 main.go:141] libmachine: (flannel-266997) Calling .GetURL
	I0214 22:00:40.478939  302662 main.go:141] libmachine: (flannel-266997) DBG | using libvirt version 6000000
	I0214 22:00:40.481396  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481778  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.481807  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481953  302662 main.go:141] libmachine: Docker is up and running!
	I0214 22:00:40.481977  302662 main.go:141] libmachine: Reticulating splines...
	I0214 22:00:40.481987  302662 client.go:171] duration metric: took 23.84148991s to LocalClient.Create
	I0214 22:00:40.482019  302662 start.go:167] duration metric: took 23.841568434s to libmachine.API.Create "flannel-266997"
	I0214 22:00:40.482032  302662 start.go:293] postStartSetup for "flannel-266997" (driver="kvm2")
	I0214 22:00:40.482052  302662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:00:40.482086  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.482376  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:00:40.482407  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.484968  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485363  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.485394  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.485749  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.485890  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.486025  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.573729  302662 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:00:40.577977  302662 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:00:40.578003  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:00:40.578075  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:00:40.578180  302662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:00:40.578302  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:00:40.588072  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:00:40.612075  302662 start.go:296] duration metric: took 130.020062ms for postStartSetup
	I0214 22:00:40.612132  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:40.612708  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.615427  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.615734  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.615764  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.616036  302662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/config.json ...
	I0214 22:00:40.616256  302662 start.go:128] duration metric: took 23.993767271s to createHost
	I0214 22:00:40.616279  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.618824  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619145  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.619172  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619365  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.619515  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619667  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619812  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.619942  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.620120  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.620135  302662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:00:40.739233  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570440.696234424
	
	I0214 22:00:40.739258  302662 fix.go:216] guest clock: 1739570440.696234424
	I0214 22:00:40.739268  302662 fix.go:229] Guest: 2025-02-14 22:00:40.696234424 +0000 UTC Remote: 2025-02-14 22:00:40.616269623 +0000 UTC m=+24.118806419 (delta=79.964801ms)
	I0214 22:00:40.739303  302662 fix.go:200] guest clock delta is within tolerance: 79.964801ms
	I0214 22:00:40.739310  302662 start.go:83] releasing machines lock for "flannel-266997", held for 24.116939765s
	I0214 22:00:40.739341  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.739624  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.742553  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.742948  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.742975  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.743235  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743808  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743985  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.744102  302662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:00:40.744175  302662 ssh_runner.go:195] Run: cat /version.json
	I0214 22:00:40.744198  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.744177  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.747113  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747256  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747420  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747485  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747553  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.747704  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.747663  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747759  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747849  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.747915  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.748050  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.748071  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.748190  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.748337  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.836766  302662 ssh_runner.go:195] Run: systemctl --version
	I0214 22:00:40.864976  302662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:00:41.030697  302662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:00:41.037406  302662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:00:41.037479  302662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:00:41.054755  302662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 22:00:41.054780  302662 start.go:495] detecting cgroup driver to use...
	I0214 22:00:41.054846  302662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:00:41.070471  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:00:41.085648  302662 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:00:41.085703  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:00:41.101988  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:00:41.118492  302662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:00:41.258887  302662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:00:41.416252  302662 docker.go:233] disabling docker service ...
	I0214 22:00:41.416318  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:00:41.433330  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:00:41.447924  302662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W0214 22:00:36.876425  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:36.876444  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:36.876460  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:36.954714  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:36.954740  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:39.500037  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:39.520812  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:39.520889  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:39.562216  296043 cri.go:89] found id: ""
	I0214 22:00:39.562250  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.562263  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:39.562271  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:39.562336  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:39.601201  296043 cri.go:89] found id: ""
	I0214 22:00:39.601234  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.601247  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:39.601255  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:39.601315  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:39.640202  296043 cri.go:89] found id: ""
	I0214 22:00:39.640231  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.640242  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:39.640250  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:39.640307  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:39.674932  296043 cri.go:89] found id: ""
	I0214 22:00:39.674960  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.674972  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:39.674981  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:39.675042  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:39.724788  296043 cri.go:89] found id: ""
	I0214 22:00:39.724820  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.724833  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:39.724841  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:39.724908  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:39.771267  296043 cri.go:89] found id: ""
	I0214 22:00:39.771295  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.771306  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:39.771314  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:39.771369  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:39.810824  296043 cri.go:89] found id: ""
	I0214 22:00:39.810852  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.810864  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:39.810871  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:39.810933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:39.852769  296043 cri.go:89] found id: ""
	I0214 22:00:39.852794  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.852803  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:39.852815  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:39.852831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:39.906779  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:39.906808  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:39.924045  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:39.924072  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:40.027558  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:40.027580  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:40.027594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:40.130386  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:40.130415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:41.665522  302662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:00:41.808101  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:00:41.827287  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:00:41.846475  302662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:00:41.846535  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.858296  302662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:00:41.858365  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.871564  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.892941  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.914718  302662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:00:41.929404  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.943358  302662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.967621  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.981572  302662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:00:41.993282  302662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:00:41.993338  302662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:00:42.007298  302662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:00:42.020823  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:42.168987  302662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 22:00:42.522679  302662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:00:42.522753  302662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:00:42.527926  302662 start.go:563] Will wait 60s for crictl version
	I0214 22:00:42.528000  302662 ssh_runner.go:195] Run: which crictl
	I0214 22:00:42.532262  302662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:00:42.583646  302662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:00:42.583793  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.613308  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.651554  302662 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:00:40.740919  304371 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0214 22:00:40.741156  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:00:40.741214  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:00:40.758664  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0214 22:00:40.759104  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:00:40.759684  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:00:40.759711  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:00:40.760116  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:00:40.760351  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:00:40.760523  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:00:40.760689  304371 start.go:159] libmachine.API.Create for "bridge-266997" (driver="kvm2")
	I0214 22:00:40.760732  304371 client.go:168] LocalClient.Create starting
	I0214 22:00:40.760769  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 22:00:40.760801  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760820  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760889  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 22:00:40.760925  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760947  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760973  304371 main.go:141] libmachine: Running pre-create checks...
	I0214 22:00:40.760985  304371 main.go:141] libmachine: (bridge-266997) Calling .PreCreateCheck
	I0214 22:00:40.761428  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:00:40.761930  304371 main.go:141] libmachine: Creating machine...
	I0214 22:00:40.761945  304371 main.go:141] libmachine: (bridge-266997) Calling .Create
	I0214 22:00:40.762102  304371 main.go:141] libmachine: (bridge-266997) creating KVM machine...
	I0214 22:00:40.762121  304371 main.go:141] libmachine: (bridge-266997) creating network...
	I0214 22:00:40.763213  304371 main.go:141] libmachine: (bridge-266997) DBG | found existing default KVM network
	I0214 22:00:40.764445  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.764318  304393 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:fa:84} reservation:<nil>}
	I0214 22:00:40.765726  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.765653  304393 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266bc0}
	I0214 22:00:40.765754  304371 main.go:141] libmachine: (bridge-266997) DBG | created network xml: 
	I0214 22:00:40.765764  304371 main.go:141] libmachine: (bridge-266997) DBG | <network>
	I0214 22:00:40.765774  304371 main.go:141] libmachine: (bridge-266997) DBG |   <name>mk-bridge-266997</name>
	I0214 22:00:40.765780  304371 main.go:141] libmachine: (bridge-266997) DBG |   <dns enable='no'/>
	I0214 22:00:40.765786  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765794  304371 main.go:141] libmachine: (bridge-266997) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0214 22:00:40.765810  304371 main.go:141] libmachine: (bridge-266997) DBG |     <dhcp>
	I0214 22:00:40.765819  304371 main.go:141] libmachine: (bridge-266997) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0214 22:00:40.765830  304371 main.go:141] libmachine: (bridge-266997) DBG |     </dhcp>
	I0214 22:00:40.765836  304371 main.go:141] libmachine: (bridge-266997) DBG |   </ip>
	I0214 22:00:40.765843  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765848  304371 main.go:141] libmachine: (bridge-266997) DBG | </network>
	I0214 22:00:40.765856  304371 main.go:141] libmachine: (bridge-266997) DBG | 
	I0214 22:00:40.770689  304371 main.go:141] libmachine: (bridge-266997) DBG | trying to create private KVM network mk-bridge-266997 192.168.50.0/24...
	I0214 22:00:40.854522  304371 main.go:141] libmachine: (bridge-266997) DBG | private KVM network mk-bridge-266997 192.168.50.0/24 created
	I0214 22:00:40.854555  304371 main.go:141] libmachine: (bridge-266997) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:40.854570  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.854493  304393 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.854582  304371 main.go:141] libmachine: (bridge-266997) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 22:00:40.854672  304371 main.go:141] libmachine: (bridge-266997) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 22:00:41.215883  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.215729  304393 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa...
	I0214 22:00:41.309617  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309464  304393 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk...
	I0214 22:00:41.309654  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing magic tar header
	I0214 22:00:41.309668  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing SSH key tar header
	I0214 22:00:41.309681  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309616  304393 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:41.309770  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997
	I0214 22:00:41.309791  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 22:00:41.309807  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:41.309822  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 22:00:41.309835  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 (perms=drwx------)
	I0214 22:00:41.309848  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 22:00:41.309858  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 22:00:41.309871  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 22:00:41.309884  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 22:00:41.309910  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 22:00:41.309927  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 22:00:41.309938  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins
	I0214 22:00:41.309949  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home
	I0214 22:00:41.309959  304371 main.go:141] libmachine: (bridge-266997) DBG | skipping /home - not owner
	I0214 22:00:41.309969  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.311296  304371 main.go:141] libmachine: (bridge-266997) define libvirt domain using xml: 
	I0214 22:00:41.311319  304371 main.go:141] libmachine: (bridge-266997) <domain type='kvm'>
	I0214 22:00:41.311329  304371 main.go:141] libmachine: (bridge-266997)   <name>bridge-266997</name>
	I0214 22:00:41.311357  304371 main.go:141] libmachine: (bridge-266997)   <memory unit='MiB'>3072</memory>
	I0214 22:00:41.311407  304371 main.go:141] libmachine: (bridge-266997)   <vcpu>2</vcpu>
	I0214 22:00:41.311453  304371 main.go:141] libmachine: (bridge-266997)   <features>
	I0214 22:00:41.311464  304371 main.go:141] libmachine: (bridge-266997)     <acpi/>
	I0214 22:00:41.311473  304371 main.go:141] libmachine: (bridge-266997)     <apic/>
	I0214 22:00:41.311482  304371 main.go:141] libmachine: (bridge-266997)     <pae/>
	I0214 22:00:41.311492  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311501  304371 main.go:141] libmachine: (bridge-266997)   </features>
	I0214 22:00:41.311522  304371 main.go:141] libmachine: (bridge-266997)   <cpu mode='host-passthrough'>
	I0214 22:00:41.311533  304371 main.go:141] libmachine: (bridge-266997)   
	I0214 22:00:41.311543  304371 main.go:141] libmachine: (bridge-266997)   </cpu>
	I0214 22:00:41.311556  304371 main.go:141] libmachine: (bridge-266997)   <os>
	I0214 22:00:41.311566  304371 main.go:141] libmachine: (bridge-266997)     <type>hvm</type>
	I0214 22:00:41.311575  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='cdrom'/>
	I0214 22:00:41.311585  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='hd'/>
	I0214 22:00:41.311597  304371 main.go:141] libmachine: (bridge-266997)     <bootmenu enable='no'/>
	I0214 22:00:41.311604  304371 main.go:141] libmachine: (bridge-266997)   </os>
	I0214 22:00:41.311615  304371 main.go:141] libmachine: (bridge-266997)   <devices>
	I0214 22:00:41.311623  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='cdrom'>
	I0214 22:00:41.311640  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/boot2docker.iso'/>
	I0214 22:00:41.311651  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hdc' bus='scsi'/>
	I0214 22:00:41.311659  304371 main.go:141] libmachine: (bridge-266997)       <readonly/>
	I0214 22:00:41.311669  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311679  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='disk'>
	I0214 22:00:41.311691  304371 main.go:141] libmachine: (bridge-266997)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 22:00:41.311708  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk'/>
	I0214 22:00:41.311719  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hda' bus='virtio'/>
	I0214 22:00:41.311731  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311745  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311758  304371 main.go:141] libmachine: (bridge-266997)       <source network='mk-bridge-266997'/>
	I0214 22:00:41.311768  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311784  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311795  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311806  304371 main.go:141] libmachine: (bridge-266997)       <source network='default'/>
	I0214 22:00:41.311816  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311835  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311845  304371 main.go:141] libmachine: (bridge-266997)     <serial type='pty'>
	I0214 22:00:41.311854  304371 main.go:141] libmachine: (bridge-266997)       <target port='0'/>
	I0214 22:00:41.311863  304371 main.go:141] libmachine: (bridge-266997)     </serial>
	I0214 22:00:41.311871  304371 main.go:141] libmachine: (bridge-266997)     <console type='pty'>
	I0214 22:00:41.311882  304371 main.go:141] libmachine: (bridge-266997)       <target type='serial' port='0'/>
	I0214 22:00:41.311894  304371 main.go:141] libmachine: (bridge-266997)     </console>
	I0214 22:00:41.311904  304371 main.go:141] libmachine: (bridge-266997)     <rng model='virtio'>
	I0214 22:00:41.311913  304371 main.go:141] libmachine: (bridge-266997)       <backend model='random'>/dev/random</backend>
	I0214 22:00:41.311922  304371 main.go:141] libmachine: (bridge-266997)     </rng>
	I0214 22:00:41.311929  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311935  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311943  304371 main.go:141] libmachine: (bridge-266997)   </devices>
	I0214 22:00:41.311953  304371 main.go:141] libmachine: (bridge-266997) </domain>
	I0214 22:00:41.311963  304371 main.go:141] libmachine: (bridge-266997) 
	I0214 22:00:41.316746  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:64:b9:e2 in network default
	I0214 22:00:41.317498  304371 main.go:141] libmachine: (bridge-266997) starting domain...
	I0214 22:00:41.317522  304371 main.go:141] libmachine: (bridge-266997) ensuring networks are active...
	I0214 22:00:41.317534  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.318252  304371 main.go:141] libmachine: (bridge-266997) Ensuring network default is active
	I0214 22:00:41.318659  304371 main.go:141] libmachine: (bridge-266997) Ensuring network mk-bridge-266997 is active
	I0214 22:00:41.319251  304371 main.go:141] libmachine: (bridge-266997) getting domain XML...
	I0214 22:00:41.320056  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.741479  304371 main.go:141] libmachine: (bridge-266997) waiting for IP...
	I0214 22:00:41.742488  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.743161  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:41.743281  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.743162  304393 retry.go:31] will retry after 281.296096ms: waiting for domain to come up
	I0214 22:00:42.026644  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.027336  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.027373  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.027305  304393 retry.go:31] will retry after 320.245979ms: waiting for domain to come up
	I0214 22:00:42.348610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.349147  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.349189  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.349091  304393 retry.go:31] will retry after 386.466755ms: waiting for domain to come up
	I0214 22:00:42.737580  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.738183  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.738213  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.738129  304393 retry.go:31] will retry after 559.616616ms: waiting for domain to come up
	I0214 22:00:43.299023  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:43.299572  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:43.299604  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:43.299538  304393 retry.go:31] will retry after 737.634158ms: waiting for domain to come up
	I0214 22:00:44.038490  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.039152  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.039187  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.039125  304393 retry.go:31] will retry after 770.231832ms: waiting for domain to come up
	I0214 22:00:44.811167  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.811701  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.811735  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.811676  304393 retry.go:31] will retry after 1.145451756s: waiting for domain to come up
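Note: the repeated `retry.go:31] will retry after ...: waiting for domain to come up` lines above are libmachine polling libvirt for the new domain's DHCP lease, sleeping a little longer between attempts. The following is a minimal, self-contained Go sketch of that wait-with-backoff pattern; the function names, backoff growth, and the `lookupIP` stub are illustrative assumptions, not minikube's actual retry implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
	// it keeps returning an error until the domain has obtained an address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupIP with a growing delay, mirroring the
	// "will retry after ...: waiting for domain to come up" log lines above.
	func waitForIP(domain string, attempts int, initial time.Duration) (string, error) {
		delay := initial
		for i := 0; i < attempts; i++ {
			ip, err := lookupIP(domain)
			if err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the backoff, roughly as the logged delays do
		}
		return "", fmt.Errorf("domain %s never came up", domain)
	}

	func main() {
		if _, err := waitForIP("bridge-266997", 5, 300*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}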
	I0214 22:00:42.652620  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:42.655747  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656123  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:42.656157  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656409  302662 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0214 22:00:42.660943  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:00:42.675829  302662 kubeadm.go:875] updating cluster {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:00:42.675939  302662 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:42.676015  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:42.716871  302662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:00:42.716942  302662 ssh_runner.go:195] Run: which lz4
	I0214 22:00:42.721755  302662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:00:42.726679  302662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:00:42.726706  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:00:44.256067  302662 crio.go:462] duration metric: took 1.53433582s to copy over tarball
	I0214 22:00:44.256172  302662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
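Note: the `{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"` one-liner above makes the hosts entry idempotent: any stale mapping for the name is dropped before the current one is appended. A small Go sketch of the same upsert, applied to a local file purely for illustration (the real update runs over SSH inside the guest):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry rewrites a hosts-style file so it contains exactly one
	// line mapping the given name, mirroring the grep/echo/cp one-liner above.
	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue // drop blank lines and any stale mapping for name
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// "hosts.example" is a hypothetical local path used only for this sketch.
		if err := upsertHostsEntry("hosts.example", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Println("update failed:", err)
		}
	}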
	I0214 22:00:42.679860  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:42.699140  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:42.699212  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:42.744951  296043 cri.go:89] found id: ""
	I0214 22:00:42.744980  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.744992  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:42.745002  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:42.745061  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:42.795928  296043 cri.go:89] found id: ""
	I0214 22:00:42.795960  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.795973  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:42.795981  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:42.796051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:42.850295  296043 cri.go:89] found id: ""
	I0214 22:00:42.850330  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.850344  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:42.850354  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:42.850427  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:42.913832  296043 cri.go:89] found id: ""
	I0214 22:00:42.913862  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.913874  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:42.913884  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:42.913947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:42.983499  296043 cri.go:89] found id: ""
	I0214 22:00:42.983589  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.983607  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:42.983615  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:42.983689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:43.037301  296043 cri.go:89] found id: ""
	I0214 22:00:43.037331  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.037343  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:43.037351  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:43.037419  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:43.084109  296043 cri.go:89] found id: ""
	I0214 22:00:43.084141  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.084153  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:43.084161  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:43.084233  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:43.139429  296043 cri.go:89] found id: ""
	I0214 22:00:43.139460  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.139473  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:43.139486  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:43.139503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:43.203986  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:43.204033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:43.221265  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:43.221297  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:43.326457  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:43.326485  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:43.326510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:43.450012  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:43.450053  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
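Note: each "Gathering logs for ..." cycle above shells out to journalctl, dmesg, kubectl and crictl on the guest; the "describe nodes" step fails here because no apiserver is listening on localhost:8443 yet. A minimal Go sketch of collecting two of those sources by running the commands locally (illustrative only; minikube executes them on the guest over SSH, typically with sudo):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one diagnostic command and returns its combined output,
	// mirroring the "Gathering logs for ..." steps in the log above.
	func gather(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		for _, c := range [][]string{
			{"journalctl", "-u", "kubelet", "-n", "400"},
			{"journalctl", "-u", "crio", "-n", "400"},
		} {
			if out, err := gather(c[0], c[1:]...); err != nil {
				fmt.Printf("%v failed: %v\n", c, err)
			} else {
				fmt.Printf("%v returned %d bytes\n", c, len(out))
			}
		}
	}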
	I0214 22:00:46.020884  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:46.036692  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:46.036773  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:46.078455  296043 cri.go:89] found id: ""
	I0214 22:00:46.078496  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.078510  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:46.078521  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:46.078599  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:46.126385  296043 cri.go:89] found id: ""
	I0214 22:00:46.126418  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.126430  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:46.126438  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:46.126505  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:46.174790  296043 cri.go:89] found id: ""
	I0214 22:00:46.174823  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.174836  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:46.174844  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:46.174911  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:46.236219  296043 cri.go:89] found id: ""
	I0214 22:00:46.236264  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.236276  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:46.236284  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:46.236349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:46.279991  296043 cri.go:89] found id: ""
	I0214 22:00:46.280019  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.280031  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:46.280038  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:46.280112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:46.316834  296043 cri.go:89] found id: ""
	I0214 22:00:46.316866  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.316878  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:46.316887  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:46.316951  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:46.355156  296043 cri.go:89] found id: ""
	I0214 22:00:46.355183  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.355192  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:46.355198  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:46.355252  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:46.400157  296043 cri.go:89] found id: ""
	I0214 22:00:46.400184  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.400193  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:46.400204  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:46.400220  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.451755  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:46.451791  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:46.527757  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:46.527804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:46.544748  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:46.544789  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:46.629059  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:46.629085  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:46.629101  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:45.959707  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:45.960207  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:45.960270  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:45.960194  304393 retry.go:31] will retry after 1.00130128s: waiting for domain to come up
	I0214 22:00:46.962593  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:46.963008  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:46.963041  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:46.962955  304393 retry.go:31] will retry after 1.285042496s: waiting for domain to come up
	I0214 22:00:48.250543  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:48.250935  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:48.250965  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:48.250905  304393 retry.go:31] will retry after 1.446388395s: waiting for domain to come up
	I0214 22:00:49.698809  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:49.699471  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:49.699494  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:49.699386  304393 retry.go:31] will retry after 1.758522672s: waiting for domain to come up
	I0214 22:00:46.623241  302662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.367029567s)
	I0214 22:00:46.623279  302662 crio.go:469] duration metric: took 2.367170567s to extract the tarball
	I0214 22:00:46.623290  302662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:00:46.677690  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:46.722617  302662 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:00:46.722657  302662 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:00:46.722670  302662 kubeadm.go:926] updating node { 192.168.61.227 8443 v1.32.1 crio true true} ...
	I0214 22:00:46.722822  302662 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0214 22:00:46.722916  302662 ssh_runner.go:195] Run: crio config
	I0214 22:00:46.772485  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:46.772512  302662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:00:46.772537  302662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-266997 NodeName:flannel-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:00:46.772661  302662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:00:46.772737  302662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:00:46.784220  302662 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:00:46.784289  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:00:46.795155  302662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0214 22:00:46.811382  302662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:00:46.827059  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
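Note: the kubeadm config dumped above is rendered from per-cluster values (node IP, API server port, cluster name, runtime socket) and then written to /var/tmp/minikube/kubeadm.yaml.new. A trimmed, hypothetical sketch of that rendering step with Go's text/template; only the few fields shown are filled in, and the template text here is an illustration, not minikube's actual bootstrapper template.

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down stand-in for the InitConfiguration portion of the config above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	`

	type params struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values taken from the log: node IP 192.168.61.227, port 8443, profile flannel-266997.
		_ = t.Execute(os.Stdout, params{NodeIP: "192.168.61.227", APIServerPort: 8443, NodeName: "flannel-266997"})
	}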
	I0214 22:00:46.843173  302662 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0214 22:00:46.846933  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:00:46.859321  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:46.987406  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:00:47.004349  302662 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997 for IP: 192.168.61.227
	I0214 22:00:47.004372  302662 certs.go:194] generating shared ca certs ...
	I0214 22:00:47.004394  302662 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.004581  302662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:00:47.004694  302662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:00:47.004720  302662 certs.go:256] generating profile certs ...
	I0214 22:00:47.004800  302662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key
	I0214 22:00:47.004820  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt with IP's: []
	I0214 22:00:47.107488  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt ...
	I0214 22:00:47.107515  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: {Name:mkcafc2c347155a87934cc2b1a02a2ae438963f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107679  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key ...
	I0214 22:00:47.107689  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key: {Name:mk4272dd225f468d379f0edd78b2d669ffde6d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107784  302662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247
	I0214 22:00:47.107805  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0214 22:00:47.253098  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 ...
	I0214 22:00:47.253126  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247: {Name:mk1eb945c33215ba17bdc46ffcf8840c7f3dd723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253276  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 ...
	I0214 22:00:47.253288  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247: {Name:mkaaf59e6a445fe3bbdd6b7d0c2fa8bb8ab97969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253362  302662 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt
	I0214 22:00:47.253431  302662 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key
	I0214 22:00:47.253483  302662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key
	I0214 22:00:47.253498  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt with IP's: []
	I0214 22:00:47.423779  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt ...
	I0214 22:00:47.423813  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt: {Name:mk6b216b0369b6fec0e56e8e85f07a87b56291e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.423984  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key ...
	I0214 22:00:47.423997  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key: {Name:mk7e5c6c7d7c32823cb9d28b264f6cfeaebe6642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.424190  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:00:47.424232  302662 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:00:47.424244  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:00:47.424269  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:00:47.424295  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:00:47.424323  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:00:47.424371  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
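Note: the profile apiserver certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227] and is signed by the shared minikubeCA key that was reused rather than regenerated. A minimal sketch of issuing such a cert with Go's crypto/x509 follows; it is an illustration of the technique, not minikube's crypto.go, and the CA here is generated on the fly instead of loaded from disk.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Hypothetical CA, standing in for the reused minikubeCA key pair.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate with the IP SANs listed in the apiserver cert log line.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.227"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}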
	I0214 22:00:47.425017  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:00:47.450688  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:00:47.475301  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:00:47.506864  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:00:47.535303  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:00:47.558848  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:00:47.582259  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:00:47.605880  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 22:00:47.629346  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:00:47.655313  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:00:47.684140  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:00:47.711649  302662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:00:47.728204  302662 ssh_runner.go:195] Run: openssl version
	I0214 22:00:47.734993  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:00:47.745552  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.749952  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.750009  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.755881  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:00:47.766140  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:00:47.776438  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781213  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781254  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.788489  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:00:47.799309  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:00:47.809509  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.813957  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.814001  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.819446  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 22:00:47.829331  302662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:00:47.833329  302662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:00:47.833389  302662 kubeadm.go:392] StartCluster: {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:47.833488  302662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:00:47.833542  302662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:00:47.872065  302662 cri.go:89] found id: ""
	I0214 22:00:47.872175  302662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:00:47.886707  302662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:00:47.897518  302662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:00:47.906407  302662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:00:47.906422  302662 kubeadm.go:157] found existing configuration files:
	
	I0214 22:00:47.906468  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:00:47.917119  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:00:47.917169  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:00:47.927075  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:00:47.936360  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:00:47.936401  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:00:47.946326  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.958232  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:00:47.958271  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.970063  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:00:47.983821  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:00:47.983884  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:00:47.993655  302662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:00:48.149190  302662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:00:49.216868  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:49.235561  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:49.235639  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:49.291785  296043 cri.go:89] found id: ""
	I0214 22:00:49.291817  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.291830  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:49.291840  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:49.291901  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:49.340347  296043 cri.go:89] found id: ""
	I0214 22:00:49.340374  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.340385  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:49.340393  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:49.340446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:49.386999  296043 cri.go:89] found id: ""
	I0214 22:00:49.387030  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.387041  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:49.387048  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:49.387114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:49.433819  296043 cri.go:89] found id: ""
	I0214 22:00:49.433849  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.433861  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:49.433868  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:49.433930  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:49.477406  296043 cri.go:89] found id: ""
	I0214 22:00:49.477453  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.477467  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:49.477478  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:49.477560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:49.522581  296043 cri.go:89] found id: ""
	I0214 22:00:49.522618  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.522648  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:49.522657  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:49.522721  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:49.560370  296043 cri.go:89] found id: ""
	I0214 22:00:49.560399  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.560410  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:49.560418  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:49.560479  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:49.600705  296043 cri.go:89] found id: ""
	I0214 22:00:49.600738  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.600751  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:49.600765  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:49.600787  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:49.692921  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:49.693003  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:49.715093  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:49.715190  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:49.819499  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:49.819529  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:49.819546  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:49.955944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:49.955994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:51.459674  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:51.460265  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:51.460299  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:51.460228  304393 retry.go:31] will retry after 2.818661449s: waiting for domain to come up
	I0214 22:00:54.281066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:54.281541  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:54.281618  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:54.281543  304393 retry.go:31] will retry after 3.13231059s: waiting for domain to come up
	I0214 22:00:52.528580  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:52.545309  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:52.545394  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:52.587415  296043 cri.go:89] found id: ""
	I0214 22:00:52.587446  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.587458  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:52.587466  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:52.587534  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:52.647538  296043 cri.go:89] found id: ""
	I0214 22:00:52.647649  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.647668  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:52.647677  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:52.647749  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:52.700570  296043 cri.go:89] found id: ""
	I0214 22:00:52.700603  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.700615  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:52.700624  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:52.700687  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:52.740732  296043 cri.go:89] found id: ""
	I0214 22:00:52.740764  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.740775  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:52.740782  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:52.740846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:52.781456  296043 cri.go:89] found id: ""
	I0214 22:00:52.781491  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.781503  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:52.781512  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:52.781581  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:52.829342  296043 cri.go:89] found id: ""
	I0214 22:00:52.829380  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.829392  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:52.829400  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:52.829471  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:52.879000  296043 cri.go:89] found id: ""
	I0214 22:00:52.879033  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.879045  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:52.879053  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:52.879127  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:52.923620  296043 cri.go:89] found id: ""
	I0214 22:00:52.923667  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.923680  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:52.923698  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:52.923717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:53.052613  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:53.052665  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:53.105757  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:53.105848  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:53.188362  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:53.188408  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:53.210408  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:53.210462  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:53.308816  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
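The refused connections above mean nothing is answering on localhost:8443 yet, which is consistent with crictl finding no kube-apiserver container. A quick manual check from the guest, sketched under the assumption that crictl and curl are available there (the log itself already invokes crictl):
	# does an apiserver container exist at all, and does 8443 answer?
	sudo crictl ps -a --name=kube-apiserver
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable yet"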
	I0214 22:00:55.810467  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:55.825649  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:55.825701  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:55.861736  296043 cri.go:89] found id: ""
	I0214 22:00:55.861759  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.861769  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:55.861776  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:55.861826  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:55.903282  296043 cri.go:89] found id: ""
	I0214 22:00:55.903318  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.903330  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:55.903352  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:55.903423  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:55.948890  296043 cri.go:89] found id: ""
	I0214 22:00:55.948919  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.948930  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:55.948937  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:55.948992  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:55.994279  296043 cri.go:89] found id: ""
	I0214 22:00:55.994307  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.994316  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:55.994321  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:55.994376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:56.039497  296043 cri.go:89] found id: ""
	I0214 22:00:56.039539  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.039551  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:56.039563  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:56.039630  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:56.079255  296043 cri.go:89] found id: ""
	I0214 22:00:56.079284  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.079294  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:56.079303  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:56.079367  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:56.121581  296043 cri.go:89] found id: ""
	I0214 22:00:56.121610  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.121622  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:56.121630  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:56.121689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:56.175042  296043 cri.go:89] found id: ""
	I0214 22:00:56.175066  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.175076  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:56.175089  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:56.175103  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:56.229769  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:56.229804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:56.243975  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:56.244001  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:56.319958  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:56.319982  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:56.319996  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:56.406004  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:56.406031  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:58.451548  302662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:00:58.451629  302662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:00:58.451729  302662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:00:58.451841  302662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:00:58.451943  302662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:00:58.452016  302662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:00:58.453381  302662 out.go:235]   - Generating certificates and keys ...
	I0214 22:00:58.453484  302662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:00:58.453567  302662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:00:58.453655  302662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:00:58.453731  302662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:00:58.453819  302662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:00:58.453888  302662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:00:58.453955  302662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:00:58.454117  302662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454193  302662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:00:58.454361  302662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454457  302662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:00:58.454548  302662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:00:58.454610  302662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:00:58.454703  302662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:00:58.454782  302662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:00:58.454863  302662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:00:58.454943  302662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:00:58.455064  302662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:00:58.455162  302662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:00:58.455295  302662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:00:58.455393  302662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:00:58.457252  302662 out.go:235]   - Booting up control plane ...
	I0214 22:00:58.457378  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:00:58.457451  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:00:58.457518  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:00:58.457610  302662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:00:58.457721  302662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:00:58.457788  302662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:00:58.457914  302662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:00:58.458088  302662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:00:58.458149  302662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.319865ms
	I0214 22:00:58.458214  302662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:00:58.458290  302662 kubeadm.go:310] [api-check] The API server is healthy after 5.001402391s
	I0214 22:00:58.458460  302662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:00:58.458610  302662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:00:58.458708  302662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:00:58.458905  302662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:00:58.458986  302662 kubeadm.go:310] [bootstrap-token] Using token: i1fz0a.mthozpfw6j726kwk
	I0214 22:00:58.460106  302662 out.go:235]   - Configuring RBAC rules ...
	I0214 22:00:58.460212  302662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:00:58.460327  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:00:58.460501  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:00:58.460640  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:00:58.460789  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:00:58.460862  302662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:00:58.460961  302662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:00:58.460999  302662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:00:58.461050  302662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:00:58.461063  302662 kubeadm.go:310] 
	I0214 22:00:58.461122  302662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:00:58.461128  302662 kubeadm.go:310] 
	I0214 22:00:58.461201  302662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:00:58.461207  302662 kubeadm.go:310] 
	I0214 22:00:58.461228  302662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:00:58.461309  302662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:00:58.461378  302662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:00:58.461386  302662 kubeadm.go:310] 
	I0214 22:00:58.461462  302662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:00:58.461473  302662 kubeadm.go:310] 
	I0214 22:00:58.461518  302662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:00:58.461525  302662 kubeadm.go:310] 
	I0214 22:00:58.461568  302662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:00:58.461647  302662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:00:58.461725  302662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:00:58.461733  302662 kubeadm.go:310] 
	I0214 22:00:58.461811  302662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:00:58.461891  302662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:00:58.461898  302662 kubeadm.go:310] 
	I0214 22:00:58.462022  302662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462119  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:00:58.462141  302662 kubeadm.go:310] 	--control-plane 
	I0214 22:00:58.462144  302662 kubeadm.go:310] 
	I0214 22:00:58.462225  302662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:00:58.462241  302662 kubeadm.go:310] 
	I0214 22:00:58.462339  302662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462459  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 22:00:58.462474  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:58.463742  302662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0214 22:00:57.415007  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:57.415501  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:57.415568  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:57.415492  304393 retry.go:31] will retry after 5.136891997s: waiting for domain to come up
	I0214 22:00:58.464845  302662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 22:00:58.471373  302662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0214 22:00:58.471395  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0214 22:00:58.493635  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
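Before applying its bundled flannel manifest, minikube checks that the portmap CNI plugin binary is present and then pushes cni.yaml through kubectl, as the lines above show. A rough follow-up check, assuming the flannel pods carry "flannel" in their names (the manifest contents are not visible in this log):
	# hedged sketch: verify the CNI plugin binary and look for flannel pods after the apply
	stat /opt/cni/bin/portmap
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get pods -A -o wide | grep -i flannel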
	I0214 22:00:59.054047  302662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:00:59.054126  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.054208  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-266997 minikube.k8s.io/updated_at=2025_02_14T22_00_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=flannel-266997 minikube.k8s.io/primary=true
	I0214 22:00:59.094360  302662 ops.go:34] apiserver oom_adj: -16
	I0214 22:00:59.226069  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.727014  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.226853  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.726232  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.226169  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:58.959819  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:58.975738  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:58.975799  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:59.016692  296043 cri.go:89] found id: ""
	I0214 22:00:59.016722  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.016734  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:59.016742  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:59.016794  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:59.056462  296043 cri.go:89] found id: ""
	I0214 22:00:59.056486  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.056495  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:59.056504  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:59.056554  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:59.102865  296043 cri.go:89] found id: ""
	I0214 22:00:59.102893  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.102904  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:59.102911  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:59.102977  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:59.139163  296043 cri.go:89] found id: ""
	I0214 22:00:59.139189  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.139199  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:59.139204  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:59.139256  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:59.184113  296043 cri.go:89] found id: ""
	I0214 22:00:59.184142  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.184153  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:59.184160  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:59.184226  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:59.231073  296043 cri.go:89] found id: ""
	I0214 22:00:59.231104  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.231113  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:59.231123  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:59.231304  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:59.284699  296043 cri.go:89] found id: ""
	I0214 22:00:59.284723  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.284733  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:59.284741  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:59.284793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:59.337079  296043 cri.go:89] found id: ""
	I0214 22:00:59.337100  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.337107  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:59.337116  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:59.337133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:59.410337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:59.410365  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:59.410380  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:59.492678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:59.492710  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:59.535993  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:59.536022  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:59.596863  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:59.596889  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:01.726818  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.829407  302662 kubeadm.go:1105] duration metric: took 2.775341982s to wait for elevateKubeSystemPrivileges
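The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, then reports the elevateKubeSystemPrivileges duration. A minimal sketch of that loop, reusing the binary and kubeconfig paths from the log:
	# poll until the "default" ServiceAccount is available
	until sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done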
	I0214 22:01:01.829439  302662 kubeadm.go:394] duration metric: took 13.996054167s to StartCluster
	I0214 22:01:01.829456  302662 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.829525  302662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:01.831145  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.831377  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:01.831394  302662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:01.831459  302662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:01.831554  302662 addons.go:69] Setting storage-provisioner=true in profile "flannel-266997"
	I0214 22:01:01.831572  302662 addons.go:238] Setting addon storage-provisioner=true in "flannel-266997"
	I0214 22:01:01.831603  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.831596  302662 addons.go:69] Setting default-storageclass=true in profile "flannel-266997"
	I0214 22:01:01.831628  302662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-266997"
	I0214 22:01:01.831660  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:01.832023  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832059  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832025  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832148  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832802  302662 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:01.833905  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:01.852906  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0214 22:01:01.853018  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0214 22:01:01.853380  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853592  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853990  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854005  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854121  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854144  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854347  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854575  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854851  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.854853  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.854886  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.858344  302662 addons.go:238] Setting addon default-storageclass=true in "flannel-266997"
	I0214 22:01:01.858420  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.858836  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.858889  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.870725  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0214 22:01:01.871213  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.871699  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.871721  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.872069  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.872261  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.873845  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.875386  302662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:01.876555  302662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:01.876577  302662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:01.876594  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.879497  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.879905  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.879931  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.880082  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.880247  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0214 22:01:01.880408  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.880539  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.880643  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:01.880960  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.881434  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.881453  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.881864  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.882412  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.882463  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.898239  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0214 22:01:01.898679  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.899246  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.899268  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.899656  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.899837  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.901209  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.901385  302662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:01.901402  302662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:01.901419  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.903666  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.903938  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.904002  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.904165  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.904327  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.904465  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.904593  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:02.010213  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
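The sed pipeline above edits the CoreDNS ConfigMap in place: it inserts a hosts{} block mapping host.minikube.internal to 192.168.61.1 ahead of the forward plugin and adds the log plugin before errors. One way to confirm the injected record, using the same kubectl binary and kubeconfig:
	# hedged sketch: show the injected hosts{} block in the live ConfigMap
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o yaml | grep -A3 "hosts {"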
	I0214 22:01:02.068737  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:02.254658  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:02.280477  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:02.558819  302662 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0214 22:01:02.560262  302662 node_ready.go:35] waiting up to 15m0s for node "flannel-266997" to be "Ready" ...
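The node_ready wait above corresponds to the node reaching the Ready condition; expressed as a single command with the same 15m budget (a sketch, not what the test harness actually runs):
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    wait --for=condition=Ready node/flannel-266997 --timeout=15m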
	I0214 22:01:03.001707  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001737  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.001737  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001748  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002000  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002015  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002024  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002031  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002103  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002117  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002126  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002133  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002253  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002271  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.004236  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.004250  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.004267  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.012492  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.012514  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.012788  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.012805  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.012820  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.014783  302662 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
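With both addons reported enabled, a spot check would look roughly like the following; the pod name storage-provisioner and the class name standard are the usual minikube defaults and are assumptions here, not taken from this log:
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get pod storage-provisioner
	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get storageclass standard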
	I0214 22:01:02.553773  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554344  304371 main.go:141] libmachine: (bridge-266997) found domain IP: 192.168.50.81
	I0214 22:01:02.554373  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has current primary IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554391  304371 main.go:141] libmachine: (bridge-266997) reserving static IP address...
	I0214 22:01:02.554641  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find host DHCP lease matching {name: "bridge-266997", mac: "52:54:00:b2:15:b0", ip: "192.168.50.81"} in network mk-bridge-266997
	I0214 22:01:02.642992  304371 main.go:141] libmachine: (bridge-266997) DBG | Getting to WaitForSSH function...
	I0214 22:01:02.643034  304371 main.go:141] libmachine: (bridge-266997) reserved static IP address 192.168.50.81 for domain bridge-266997
	I0214 22:01:02.643044  304371 main.go:141] libmachine: (bridge-266997) waiting for SSH...
	I0214 22:01:02.646143  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646598  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.646647  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646923  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH client type: external
	I0214 22:01:02.646961  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa (-rw-------)
	I0214 22:01:02.647011  304371 main.go:141] libmachine: (bridge-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:01:02.647024  304371 main.go:141] libmachine: (bridge-266997) DBG | About to run SSH command:
	I0214 22:01:02.647035  304371 main.go:141] libmachine: (bridge-266997) DBG | exit 0
	I0214 22:01:02.788308  304371 main.go:141] libmachine: (bridge-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:01:02.788649  304371 main.go:141] libmachine: (bridge-266997) KVM machine creation complete
	I0214 22:01:02.789044  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:02.789606  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789750  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789927  304371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:01:02.789946  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:02.791392  304371 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:01:02.791405  304371 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:01:02.791410  304371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:01:02.791416  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.793977  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794285  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.794302  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794418  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.794553  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794709  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794828  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.794971  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.795189  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.795201  304371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:01:02.909895  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:01:02.909920  304371 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:01:02.909929  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.912696  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913040  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.913066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913200  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.913439  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913647  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913796  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.913932  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.914103  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.914113  304371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:01:03.028655  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:01:03.028744  304371 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:01:03.028760  304371 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:01:03.028776  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029006  304371 buildroot.go:166] provisioning hostname "bridge-266997"
	I0214 22:01:03.029030  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029238  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.032183  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032556  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.032589  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032715  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.032907  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033059  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033225  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.033391  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.033602  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.033619  304371 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-266997 && echo "bridge-266997" | sudo tee /etc/hostname
	I0214 22:01:03.166933  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-266997
	
	I0214 22:01:03.166960  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.169777  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170149  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.170173  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170404  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.170597  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170926  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.171070  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.171304  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.171325  304371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:01:03.303955  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:01:03.303990  304371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:01:03.304021  304371 buildroot.go:174] setting up certificates
	I0214 22:01:03.304040  304371 provision.go:84] configureAuth start
	I0214 22:01:03.304054  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.304376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:03.307438  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.307857  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.307885  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.308035  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.310496  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.310856  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.310903  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.311001  304371 provision.go:143] copyHostCerts
	I0214 22:01:03.311081  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:01:03.311103  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:01:03.311172  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:01:03.311315  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:01:03.311336  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:01:03.311374  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:01:03.311492  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:01:03.311506  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:01:03.311538  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:01:03.311643  304371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.bridge-266997 san=[127.0.0.1 192.168.50.81 bridge-266997 localhost minikube]
	I0214 22:01:03.424494  304371 provision.go:177] copyRemoteCerts
	I0214 22:01:03.424546  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:01:03.424572  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.426781  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427138  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.427178  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427331  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.427484  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.427596  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.427715  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.517135  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 22:01:03.547506  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:01:03.579546  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
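configureAuth generates a server certificate signed by the local minikube CA with the SANs listed above (127.0.0.1, 192.168.50.81, bridge-266997, localhost, minikube) and copies it to /etc/docker on the guest. Inspecting those SANs on the host is straightforward, assuming openssl is installed:
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem \
	    | grep -A1 "Subject Alternative Name"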
	I0214 22:01:03.608150  304371 provision.go:87] duration metric: took 304.098585ms to configureAuth
	I0214 22:01:03.608174  304371 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:01:03.608327  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:03.608399  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.610851  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611181  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.611213  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611355  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.611503  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611641  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611754  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.611923  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.612153  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.612174  304371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:01:03.877480  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:01:03.877509  304371 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:01:03.877519  304371 main.go:141] libmachine: (bridge-266997) Calling .GetURL
	I0214 22:01:03.878693  304371 main.go:141] libmachine: (bridge-266997) DBG | using libvirt version 6000000
	I0214 22:01:03.881358  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.881777  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.881808  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.882015  304371 main.go:141] libmachine: Docker is up and running!
	I0214 22:01:03.882031  304371 main.go:141] libmachine: Reticulating splines...
	I0214 22:01:03.882040  304371 client.go:171] duration metric: took 23.121294706s to LocalClient.Create
	I0214 22:01:03.882063  304371 start.go:167] duration metric: took 23.121376335s to libmachine.API.Create "bridge-266997"
	I0214 22:01:03.882075  304371 start.go:293] postStartSetup for "bridge-266997" (driver="kvm2")
	I0214 22:01:03.882086  304371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:01:03.882116  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:03.882342  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:01:03.882376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.884877  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885218  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.885239  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885378  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.885589  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.885735  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.885845  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.976177  304371 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:01:03.980618  304371 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:01:03.980646  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:01:03.980710  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:01:03.980821  304371 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:01:03.980943  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:01:03.991483  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:04.025466  304371 start.go:296] duration metric: took 143.372996ms for postStartSetup
	I0214 22:01:04.025536  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:04.026327  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.029635  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030033  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.030057  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030352  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:01:04.030586  304371 start.go:128] duration metric: took 23.29097433s to createHost
	I0214 22:01:04.030640  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.033610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.033973  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.033998  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.034160  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.034303  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034507  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034685  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.034832  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:04.035026  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:04.035041  304371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:01:04.164811  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570464.136926718
	
	I0214 22:01:04.164832  304371 fix.go:216] guest clock: 1739570464.136926718
	I0214 22:01:04.164842  304371 fix.go:229] Guest: 2025-02-14 22:01:04.136926718 +0000 UTC Remote: 2025-02-14 22:01:04.030601008 +0000 UTC m=+24.065400357 (delta=106.32571ms)
	I0214 22:01:04.164866  304371 fix.go:200] guest clock delta is within tolerance: 106.32571ms
	I0214 22:01:04.164873  304371 start.go:83] releasing machines lock for "bridge-266997", held for 23.425433669s
	I0214 22:01:04.164896  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.165166  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.170113  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170541  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.170570  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170778  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171367  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171550  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171638  304371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:01:04.171684  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.171762  304371 ssh_runner.go:195] Run: cat /version.json
	I0214 22:01:04.171789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.174819  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175456  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.175481  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175607  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.175712  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.175787  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.175855  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.180293  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180297  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.180332  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.180351  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180558  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.180770  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.180935  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.285108  304371 ssh_runner.go:195] Run: systemctl --version
	I0214 22:01:04.293451  304371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:01:04.463259  304371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:01:04.469147  304371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:01:04.469201  304371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:01:04.484729  304371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 22:01:04.484747  304371 start.go:495] detecting cgroup driver to use...
	I0214 22:01:04.484800  304371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:01:04.502450  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:01:04.515492  304371 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:01:04.515540  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:01:04.528128  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:01:04.540475  304371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:01:04.666826  304371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:01:04.822228  304371 docker.go:233] disabling docker service ...
	I0214 22:01:04.822296  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:01:04.835915  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:01:04.848421  304371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 22:01:04.978701  304371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:01:05.096321  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:01:05.109638  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:01:05.127245  304371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:01:05.127289  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.137128  304371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:01:05.137171  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.149215  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.161652  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.173632  304371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:01:05.184990  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.195432  304371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.211772  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.222080  304371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:01:05.231350  304371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:01:05.231393  304371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:01:05.244531  304371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:01:05.253659  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:05.368821  304371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 22:01:05.484555  304371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:01:05.484625  304371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:01:05.490439  304371 start.go:563] Will wait 60s for crictl version
	I0214 22:01:05.490512  304371 ssh_runner.go:195] Run: which crictl
	I0214 22:01:05.495575  304371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:01:05.546437  304371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:01:05.546517  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.585123  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.622891  304371 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:01:03.016157  302662 addons.go:514] duration metric: took 1.184704963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:03.064160  302662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-266997" context rescaled to 1 replicas
	W0214 22:01:04.565870  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:02.111615  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:02.130034  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:02.130098  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:02.167633  296043 cri.go:89] found id: ""
	I0214 22:01:02.167669  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.167679  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:02.167687  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:02.167754  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:02.206752  296043 cri.go:89] found id: ""
	I0214 22:01:02.206778  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.206787  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:02.206793  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:02.206848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:02.242991  296043 cri.go:89] found id: ""
	I0214 22:01:02.243021  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.243033  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:02.243045  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:02.243112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:02.284141  296043 cri.go:89] found id: ""
	I0214 22:01:02.284164  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.284172  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:02.284178  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:02.284217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:02.329547  296043 cri.go:89] found id: ""
	I0214 22:01:02.329570  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.329577  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:02.329583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:02.329627  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:02.370731  296043 cri.go:89] found id: ""
	I0214 22:01:02.370758  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.370769  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:02.370778  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:02.370834  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:02.419069  296043 cri.go:89] found id: ""
	I0214 22:01:02.419102  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.419114  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:02.419122  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:02.419199  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:02.464600  296043 cri.go:89] found id: ""
	I0214 22:01:02.464636  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.464655  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:02.464670  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:02.464690  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:02.480854  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:02.480890  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:02.572148  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:02.572175  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:02.572191  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:02.686587  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:02.686646  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:02.734413  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:02.734443  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.297012  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:05.310239  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:05.310303  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:05.344855  296043 cri.go:89] found id: ""
	I0214 22:01:05.344884  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.344895  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:05.344905  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:05.344962  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:05.390466  296043 cri.go:89] found id: ""
	I0214 22:01:05.390498  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.390510  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:05.390518  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:05.390575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:05.442562  296043 cri.go:89] found id: ""
	I0214 22:01:05.442598  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.442611  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:05.442619  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:05.442707  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:05.482534  296043 cri.go:89] found id: ""
	I0214 22:01:05.482562  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.482577  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:05.482583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:05.482659  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:05.526775  296043 cri.go:89] found id: ""
	I0214 22:01:05.526802  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.526813  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:05.526821  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:05.526887  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:05.566945  296043 cri.go:89] found id: ""
	I0214 22:01:05.566971  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.566979  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:05.566991  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:05.567050  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:05.610803  296043 cri.go:89] found id: ""
	I0214 22:01:05.610836  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.610849  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:05.610857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:05.610934  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:05.658446  296043 cri.go:89] found id: ""
	I0214 22:01:05.658475  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.658485  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:05.658497  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:05.658512  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:05.731902  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:05.731929  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:05.731942  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:05.842065  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:05.842098  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:05.903308  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:05.903343  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.975417  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:05.975516  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:05.623928  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:05.627346  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.627929  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:05.627961  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.628196  304371 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0214 22:01:05.633410  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:01:05.650954  304371 kubeadm.go:875] updating cluster {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:01:05.651104  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:01:05.651162  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:05.701425  304371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:01:05.701507  304371 ssh_runner.go:195] Run: which lz4
	I0214 22:01:05.712837  304371 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:01:05.718837  304371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:01:05.718870  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:01:07.256269  304371 crio.go:462] duration metric: took 1.543466683s to copy over tarball
	I0214 22:01:07.256357  304371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 22:01:09.695876  304371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.439479772s)
	I0214 22:01:09.695918  304371 crio.go:469] duration metric: took 2.439614211s to extract the tarball
	I0214 22:01:09.695928  304371 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:01:09.733290  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:09.780117  304371 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:01:09.780140  304371 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:01:09.780160  304371 kubeadm.go:926] updating node { 192.168.50.81 8443 v1.32.1 crio true true} ...
	I0214 22:01:09.780281  304371 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0214 22:01:09.780367  304371 ssh_runner.go:195] Run: crio config
	I0214 22:01:09.827891  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:09.827918  304371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:01:09.827940  304371 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-266997 NodeName:bridge-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:01:09.828092  304371 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:01:09.828156  304371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:01:09.837899  304371 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:01:09.837957  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:01:09.847189  304371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0214 22:01:09.863880  304371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:01:09.881813  304371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0214 22:01:09.898828  304371 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0214 22:01:09.902526  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:01:09.914292  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:10.040048  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:10.057372  304371 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997 for IP: 192.168.50.81
	I0214 22:01:10.057391  304371 certs.go:194] generating shared ca certs ...
	I0214 22:01:10.057407  304371 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.057580  304371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:01:10.057639  304371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:01:10.057653  304371 certs.go:256] generating profile certs ...
	I0214 22:01:10.057737  304371 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key
	I0214 22:01:10.057770  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt with IP's: []
	I0214 22:01:10.492985  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt ...
	I0214 22:01:10.493014  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: {Name:mk0e9a544ab62bf3bac0aeef07e33db8d1284119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493211  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key ...
	I0214 22:01:10.493229  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key: {Name:mk822ad23de6909e3dcaa3a4b87a06fbdfba8176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493342  304371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201
	I0214 22:01:10.493362  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.81]
	I0214 22:01:10.673628  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 ...
	I0214 22:01:10.673651  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201: {Name:mka33ef1d0779dee85a1340cd519c438b531f8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673787  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 ...
	I0214 22:01:10.673801  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201: {Name:mk2bcfa59be0eef44107f0d874f0a177271d56dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673881  304371 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt
	I0214 22:01:10.673969  304371 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key
	I0214 22:01:10.674034  304371 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key
	I0214 22:01:10.674051  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt with IP's: []
	I0214 22:01:10.815875  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt ...
	I0214 22:01:10.815900  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt: {Name:mk07fc7632bf05ef6abf8667a18602d64842bf54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816040  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key ...
	I0214 22:01:10.816054  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key: {Name:mk49f50231c8caf0067f42cee0eef760808a4f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816226  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:01:10.816268  304371 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:01:10.816279  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:01:10.816311  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:01:10.816343  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:01:10.816367  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:01:10.816410  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:10.817057  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:01:10.849496  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:01:10.873071  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:01:10.898240  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:01:10.921216  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:01:10.944392  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:01:10.968476  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:01:10.994710  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 22:01:11.019089  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:01:11.041841  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:01:11.064672  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:01:11.087698  304371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:01:11.105733  304371 ssh_runner.go:195] Run: openssl version
	I0214 22:01:11.113022  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:01:11.124173  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128829  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128877  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.134956  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:01:11.145646  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:01:11.156620  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.160984  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.161023  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.166639  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 22:01:11.177621  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:01:11.189431  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193866  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193907  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.199670  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:01:11.210845  304371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:01:11.214693  304371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:01:11.214742  304371 kubeadm.go:392] StartCluster: {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:01:11.214826  304371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:01:11.214862  304371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:01:11.258711  304371 cri.go:89] found id: ""
	I0214 22:01:11.258765  304371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:01:11.269032  304371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:01:11.279047  304371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:01:11.288803  304371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:01:11.288822  304371 kubeadm.go:157] found existing configuration files:
	
	I0214 22:01:11.288862  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:01:11.298148  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:01:11.298188  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:01:11.307741  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:01:11.316856  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:01:11.316903  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:01:11.326555  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.335896  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:01:11.335935  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.345669  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:01:11.355306  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:01:11.355357  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:01:11.364907  304371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:01:11.427252  304371 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:01:11.427326  304371 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:01:11.531552  304371 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:01:11.531691  304371 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:01:11.531851  304371 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:01:11.543555  304371 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0214 22:01:07.185994  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:08.565172  302662 node_ready.go:49] node "flannel-266997" is "Ready"
	I0214 22:01:08.565220  302662 node_ready.go:38] duration metric: took 6.004932024s for node "flannel-266997" to be "Ready" ...
	I0214 22:01:08.565240  302662 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:08.565299  302662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.602874  302662 api_server.go:72] duration metric: took 6.771445737s to wait for apiserver process to appear ...
	I0214 22:01:08.602902  302662 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:08.602925  302662 api_server.go:253] Checking apiserver healthz at https://192.168.61.227:8443/healthz ...
	I0214 22:01:08.611745  302662 api_server.go:279] https://192.168.61.227:8443/healthz returned 200:
	ok
	I0214 22:01:08.612774  302662 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:08.612800  302662 api_server.go:131] duration metric: took 9.890538ms to wait for apiserver health ...
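The healthz wait above is a plain poll: request /healthz on the apiserver until it answers 200 with "ok" or a deadline expires. A rough sketch in Go, using the endpoint from the log (skipping TLS verification only because this is an illustrative snippet; real code would trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver healthz endpoint until it reports ok.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed cert during bootstrap, so the
			// sketch skips verification; production code would load the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz returned 200: ok, as in the log above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ready after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.227:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}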
	I0214 22:01:08.612810  302662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:08.617075  302662 system_pods.go:59] 7 kube-system pods found
	I0214 22:01:08.617117  302662 system_pods.go:61] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.617131  302662 system_pods.go:61] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.617140  302662 system_pods.go:61] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.617151  302662 system_pods.go:61] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.617162  302662 system_pods.go:61] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.617176  302662 system_pods.go:61] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.617187  302662 system_pods.go:61] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.617199  302662 system_pods.go:74] duration metric: took 4.381701ms to wait for pod list to return data ...
	I0214 22:01:08.617213  302662 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:08.620515  302662 default_sa.go:45] found service account: "default"
	I0214 22:01:08.620531  302662 default_sa.go:55] duration metric: took 3.308722ms for default service account to be created ...
	I0214 22:01:08.620537  302662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:08.628163  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.628196  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.628205  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.628217  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.628232  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.628242  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.628250  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.628261  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.628286  302662 retry.go:31] will retry after 229.157349ms: missing components: kube-dns
	I0214 22:01:08.862237  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.862283  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.862293  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.862304  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.862315  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.862322  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.862330  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.862346  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.862370  302662 retry.go:31] will retry after 313.437713ms: missing components: kube-dns
	I0214 22:01:09.180643  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.180698  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.180709  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.180720  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.180732  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.180741  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.180751  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.180762  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.180785  302662 retry.go:31] will retry after 300.968731ms: missing components: kube-dns
	I0214 22:01:09.485817  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.485866  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.485876  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.485888  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.485897  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.485903  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.485914  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.485919  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.485947  302662 retry.go:31] will retry after 439.51358ms: missing components: kube-dns
	I0214 22:01:09.929653  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.929691  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.929699  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.929711  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.929724  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.929734  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.929747  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.929753  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.929778  302662 retry.go:31] will retry after 485.567052ms: missing components: kube-dns
	I0214 22:01:10.418771  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:10.418804  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:10.418813  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:10.418823  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:10.418833  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:10.418840  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:10.418848  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:10.418856  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:10.418873  302662 retry.go:31] will retry after 756.594325ms: missing components: kube-dns
	I0214 22:01:11.179962  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:11.179995  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:11.180004  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:11.180012  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:11.180022  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:11.180032  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:11.180043  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:11.180052  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:11.180085  302662 retry.go:31] will retry after 1.009789241s: missing components: kube-dns
	I0214 22:01:08.494769  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.514374  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:08.514458  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:08.561822  296043 cri.go:89] found id: ""
	I0214 22:01:08.561850  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.561859  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:08.561865  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:08.561912  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:08.602005  296043 cri.go:89] found id: ""
	I0214 22:01:08.602038  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.602051  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:08.602059  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:08.602136  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:08.642584  296043 cri.go:89] found id: ""
	I0214 22:01:08.642612  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.642636  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:08.642647  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:08.642725  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:08.677455  296043 cri.go:89] found id: ""
	I0214 22:01:08.677490  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.677506  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:08.677514  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:08.677579  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:08.723982  296043 cri.go:89] found id: ""
	I0214 22:01:08.724032  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.724046  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:08.724056  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:08.724129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:08.775467  296043 cri.go:89] found id: ""
	I0214 22:01:08.775503  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.775516  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:08.775525  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:08.775587  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:08.820143  296043 cri.go:89] found id: ""
	I0214 22:01:08.820187  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.820209  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:08.820218  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:08.820289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:08.855406  296043 cri.go:89] found id: ""
	I0214 22:01:08.855437  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.855448  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:08.855460  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:08.855476  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:08.914025  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:08.914052  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:08.927679  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:08.927708  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:09.029673  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:09.029699  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:09.029717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:09.113311  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:09.113358  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
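When no control-plane containers are found, the run falls back to gathering node-level diagnostics; the individual commands are visible in the lines above. Collected into one shell-out sketch in Go (the command strings and the 400-line limit are taken directly from the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherLogs runs the same diagnostic commands that appear in the log above
	// and prints whatever each of them returns.
	func gatherLogs() {
		cmds := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range cmds {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s\n", name, out)
			if err != nil {
				fmt.Printf("(%s exited with error: %v)\n", name, err)
			}
		}
	}

	func main() { gatherLogs() }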
	I0214 22:01:11.659812  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:11.673901  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:11.673974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:11.710824  296043 cri.go:89] found id: ""
	I0214 22:01:11.710856  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.710868  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:11.710877  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:11.710939  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:11.749955  296043 cri.go:89] found id: ""
	I0214 22:01:11.749996  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.750009  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:11.750034  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:11.750109  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:11.784268  296043 cri.go:89] found id: ""
	I0214 22:01:11.784296  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.784308  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:11.784317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:11.784381  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:11.565511  304371 out.go:235]   - Generating certificates and keys ...
	I0214 22:01:11.565641  304371 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:01:11.565736  304371 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:01:11.597156  304371 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:01:11.777564  304371 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:01:12.000290  304371 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:01:12.274579  304371 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:01:12.340720  304371 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:01:12.341077  304371 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.592390  304371 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:01:12.592731  304371 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.789172  304371 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:01:12.860794  304371 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:01:12.958408  304371 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:01:12.958673  304371 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:01:13.132122  304371 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:01:13.373236  304371 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:01:13.504795  304371 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:01:13.776085  304371 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:01:14.088313  304371 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:01:14.089020  304371 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:01:14.093447  304371 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:01:14.095224  304371 out.go:235]   - Booting up control plane ...
	I0214 22:01:14.095351  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:01:14.095464  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:01:14.095532  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:01:14.111383  304371 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:01:14.118029  304371 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:01:14.118117  304371 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:01:14.266373  304371 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:01:14.266491  304371 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:01:14.767156  304371 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.155046ms
	I0214 22:01:14.767269  304371 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:01:12.399215  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:12.399250  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:12.399257  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:12.399265  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:12.399271  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:12.399279  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:12.399285  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:12.399296  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:12.399322  302662 retry.go:31] will retry after 1.435229105s: missing components: kube-dns
	I0214 22:01:13.838510  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:13.838553  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:13.838563  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:13.838572  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:13.838579  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:13.838584  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:13.838590  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:13.838599  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:13.838619  302662 retry.go:31] will retry after 1.229976943s: missing components: kube-dns
	I0214 22:01:15.072944  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:15.072987  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:15.072997  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:15.073007  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:15.073017  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:15.073024  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:15.073034  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:15.073042  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:15.073077  302662 retry.go:31] will retry after 1.417685153s: missing components: kube-dns
	I0214 22:01:16.494415  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:16.494450  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:16.494456  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:16.494463  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:16.494467  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:16.494471  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:16.494475  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:16.494478  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:16.494495  302662 retry.go:31] will retry after 2.360792167s: missing components: kube-dns
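Each "will retry after ...: missing components: kube-dns" line above is one iteration of a poll over the kube-system pods: the cluster is treated as ready only once every expected component, including kube-dns (CoreDNS), has a Running pod. A minimal client-go sketch of that loop, assuming an already-built clientset (the jittered delays in the log come from minikube's retry helper; plain doubling backoff is used here for brevity):

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForKubeDNS polls kube-system until a kube-dns (CoreDNS) pod reaches
	// the Running phase, retrying with a growing delay as in the log above.
	func waitForKubeDNS(ctx context.Context, client kubernetes.Interface, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns",
			})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil // kube-dns is running; the readiness check can move on
					}
				}
			}
			fmt.Printf("will retry after %s: missing components: kube-dns\n", delay)
			time.Sleep(delay)
			delay *= 2 // the real retry helper uses jittered, capped backoff
		}
		return fmt.Errorf("kube-dns still not running after %s", timeout)
	}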
	I0214 22:01:11.822362  296043 cri.go:89] found id: ""
	I0214 22:01:11.822387  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.822395  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:11.822401  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:11.822462  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:11.860753  296043 cri.go:89] found id: ""
	I0214 22:01:11.860778  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.860786  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:11.860791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:11.860833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:11.901670  296043 cri.go:89] found id: ""
	I0214 22:01:11.901697  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.901709  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:11.901717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:11.901779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:11.939194  296043 cri.go:89] found id: ""
	I0214 22:01:11.939220  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.939230  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:11.939236  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:11.939289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:11.973819  296043 cri.go:89] found id: ""
	I0214 22:01:11.973846  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.973857  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:11.973869  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:11.973882  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:12.052290  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:12.052321  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:12.099732  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:12.099775  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:12.163962  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:12.163994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:12.181579  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:12.181625  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:12.272639  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:14.774322  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:14.787244  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:14.787299  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:14.820977  296043 cri.go:89] found id: ""
	I0214 22:01:14.821011  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.821024  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:14.821034  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:14.821099  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:14.852858  296043 cri.go:89] found id: ""
	I0214 22:01:14.852879  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.852888  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:14.852893  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:14.852947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:14.896441  296043 cri.go:89] found id: ""
	I0214 22:01:14.896464  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.896475  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:14.896483  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:14.896535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:14.930673  296043 cri.go:89] found id: ""
	I0214 22:01:14.930700  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.930712  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:14.930719  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:14.930776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:14.972676  296043 cri.go:89] found id: ""
	I0214 22:01:14.972708  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.972721  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:14.972729  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:14.972797  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:15.009271  296043 cri.go:89] found id: ""
	I0214 22:01:15.009303  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.009314  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:15.009323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:15.009406  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:15.045975  296043 cri.go:89] found id: ""
	I0214 22:01:15.046007  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.046021  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:15.046029  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:15.046102  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:15.084924  296043 cri.go:89] found id: ""
	I0214 22:01:15.084956  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.084967  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:15.084980  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:15.084995  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:15.143553  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:15.143587  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:15.158649  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:15.158687  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:15.235319  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:15.235343  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:15.235363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:15.324951  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:15.324990  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:19.266915  304371 kubeadm.go:310] [api-check] The API server is healthy after 4.501226967s
	I0214 22:01:19.286682  304371 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:01:19.300140  304371 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:01:19.320686  304371 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:01:19.320946  304371 kubeadm.go:310] [mark-control-plane] Marking the node bridge-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:01:19.338179  304371 kubeadm.go:310] [bootstrap-token] Using token: 4eaob3.8jnji5hz23dblskn
	I0214 22:01:19.339524  304371 out.go:235]   - Configuring RBAC rules ...
	I0214 22:01:19.339671  304371 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:01:19.345535  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:01:19.356239  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:01:19.363770  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:01:19.366981  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:01:19.371513  304371 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:01:19.672166  304371 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:01:20.099981  304371 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:01:20.669741  304371 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:01:20.671058  304371 kubeadm.go:310] 
	I0214 22:01:20.671186  304371 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:01:20.671210  304371 kubeadm.go:310] 
	I0214 22:01:20.671373  304371 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:01:20.671393  304371 kubeadm.go:310] 
	I0214 22:01:20.671428  304371 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:01:20.671511  304371 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:01:20.671588  304371 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:01:20.671598  304371 kubeadm.go:310] 
	I0214 22:01:20.671681  304371 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:01:20.671694  304371 kubeadm.go:310] 
	I0214 22:01:20.671769  304371 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:01:20.671784  304371 kubeadm.go:310] 
	I0214 22:01:20.671862  304371 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:01:20.671971  304371 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:01:20.672051  304371 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:01:20.672059  304371 kubeadm.go:310] 
	I0214 22:01:20.672173  304371 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:01:20.672270  304371 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:01:20.672278  304371 kubeadm.go:310] 
	I0214 22:01:20.672403  304371 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.672552  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:01:20.672586  304371 kubeadm.go:310] 	--control-plane 
	I0214 22:01:20.672596  304371 kubeadm.go:310] 
	I0214 22:01:20.672722  304371 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:01:20.672757  304371 kubeadm.go:310] 
	I0214 22:01:20.672884  304371 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.673034  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 22:01:20.673551  304371 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:01:20.673583  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:20.674803  304371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
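"Configuring bridge CNI" means writing a conflist for the standard CNI bridge plugin into the node's CNI config directory so the kubelet can attach pods to a host bridge. A hedged example of what such a conflist typically looks like (the file name, subnet, and exact field values below are illustrative assumptions, not necessarily the file minikube writes, e.g. /etc/cni/net.d/100-bridge.conflist):

	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}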
	I0214 22:01:18.859941  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:18.859975  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:18.859981  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:18.859987  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:18.859991  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:18.859996  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:18.860000  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:18.860004  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:18.860019  302662 retry.go:31] will retry after 2.716114002s: missing components: kube-dns
	I0214 22:01:17.869522  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:17.886022  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:17.886114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:17.926259  296043 cri.go:89] found id: ""
	I0214 22:01:17.926287  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.926296  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:17.926302  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:17.926358  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:17.989648  296043 cri.go:89] found id: ""
	I0214 22:01:17.989675  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.989683  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:17.989689  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:17.989744  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:18.041262  296043 cri.go:89] found id: ""
	I0214 22:01:18.041295  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.041307  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:18.041315  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:18.041380  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:18.080028  296043 cri.go:89] found id: ""
	I0214 22:01:18.080059  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.080069  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:18.080075  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:18.080134  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:18.116135  296043 cri.go:89] found id: ""
	I0214 22:01:18.116163  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.116172  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:18.116179  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:18.116239  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:18.148268  296043 cri.go:89] found id: ""
	I0214 22:01:18.148302  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.148315  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:18.148323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:18.148399  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:18.180352  296043 cri.go:89] found id: ""
	I0214 22:01:18.180378  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.180388  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:18.180394  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:18.180438  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:18.211513  296043 cri.go:89] found id: ""
	I0214 22:01:18.211534  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.211541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:18.211551  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:18.211562  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:18.260797  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:18.260831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:18.273477  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:18.273503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:18.340163  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:18.340182  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:18.340193  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:18.413927  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:18.413950  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:20.952238  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:20.964925  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:20.964984  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:21.000265  296043 cri.go:89] found id: ""
	I0214 22:01:21.000295  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.000306  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:21.000314  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:21.000376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:21.042754  296043 cri.go:89] found id: ""
	I0214 22:01:21.042780  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.042790  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:21.042798  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:21.042862  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:21.078636  296043 cri.go:89] found id: ""
	I0214 22:01:21.078664  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.078676  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:21.078684  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:21.078747  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:21.112023  296043 cri.go:89] found id: ""
	I0214 22:01:21.112050  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.112058  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:21.112067  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:21.112129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:21.147419  296043 cri.go:89] found id: ""
	I0214 22:01:21.147451  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.147462  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:21.147470  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:21.147541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:21.180151  296043 cri.go:89] found id: ""
	I0214 22:01:21.180191  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.180201  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:21.180209  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:21.180271  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:21.215007  296043 cri.go:89] found id: ""
	I0214 22:01:21.215037  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.215049  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:21.215057  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:21.215122  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:21.247912  296043 cri.go:89] found id: ""
	I0214 22:01:21.247953  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.247964  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:21.247976  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:21.247992  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:21.300392  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:21.300429  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:21.313583  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:21.313604  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:21.381863  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:21.381888  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:21.381902  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:21.460562  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:21.460591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:21.580732  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:21.580767  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Running
	I0214 22:01:21.580773  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:21.580777  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:21.580781  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:21.580785  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:21.580789  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:21.580792  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:21.580800  302662 system_pods.go:126] duration metric: took 12.960258845s to wait for k8s-apps to be running ...
	I0214 22:01:21.580808  302662 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:21.580852  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:21.596764  302662 system_svc.go:56] duration metric: took 15.934258ms WaitForService to wait for kubelet
	I0214 22:01:21.596793  302662 kubeadm.go:578] duration metric: took 19.765370857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:21.596814  302662 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:21.601648  302662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:21.601680  302662 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:21.601700  302662 node_conditions.go:105] duration metric: took 4.879566ms to run NodePressure ...
	I0214 22:01:21.601715  302662 start.go:241] waiting for startup goroutines ...
	I0214 22:01:21.601731  302662 start.go:246] waiting for cluster config update ...
	I0214 22:01:21.601749  302662 start.go:255] writing updated cluster config ...
	I0214 22:01:21.602045  302662 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:21.607012  302662 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:21.610715  302662 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.619683  302662 pod_ready.go:94] pod "coredns-668d6bf9bc-vlb9g" is "Ready"
	I0214 22:01:21.619715  302662 pod_ready.go:86] duration metric: took 8.975726ms for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.621747  302662 pod_ready.go:83] waiting for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.625095  302662 pod_ready.go:94] pod "etcd-flannel-266997" is "Ready"
	I0214 22:01:21.625112  302662 pod_ready.go:86] duration metric: took 3.349739ms for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.626839  302662 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.630189  302662 pod_ready.go:94] pod "kube-apiserver-flannel-266997" is "Ready"
	I0214 22:01:21.630205  302662 pod_ready.go:86] duration metric: took 3.350537ms for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.631966  302662 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.010234  302662 pod_ready.go:94] pod "kube-controller-manager-flannel-266997" is "Ready"
	I0214 22:01:22.010258  302662 pod_ready.go:86] duration metric: took 378.271702ms for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.210925  302662 pod_ready.go:83] waiting for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.610516  302662 pod_ready.go:94] pod "kube-proxy-lnlt5" is "Ready"
	I0214 22:01:22.610544  302662 pod_ready.go:86] duration metric: took 399.590168ms for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.810190  302662 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210781  302662 pod_ready.go:94] pod "kube-scheduler-flannel-266997" is "Ready"
	I0214 22:01:23.210809  302662 pod_ready.go:86] duration metric: took 400.595935ms for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210825  302662 pod_ready.go:40] duration metric: took 1.603788898s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:23.254724  302662 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:23.256280  302662 out.go:177] * Done! kubectl is now configured to use "flannel-266997" cluster and "default" namespace by default
	I0214 22:01:20.675853  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 22:01:20.687674  304371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0214 22:01:20.710977  304371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:01:20.711051  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:20.711136  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-266997 minikube.k8s.io/updated_at=2025_02_14T22_01_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=bridge-266997 minikube.k8s.io/primary=true
	I0214 22:01:20.857437  304371 ops.go:34] apiserver oom_adj: -16
	I0214 22:01:20.857573  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.357978  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.858196  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.357909  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.858323  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.358263  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.858483  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.358410  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.857672  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.358214  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.477742  304371 kubeadm.go:1105] duration metric: took 4.766743198s to wait for elevateKubeSystemPrivileges
	I0214 22:01:25.477787  304371 kubeadm.go:394] duration metric: took 14.263049181s to StartCluster
	I0214 22:01:25.477813  304371 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.477894  304371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:25.479312  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.479566  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:25.479594  304371 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:25.479566  304371 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:25.479695  304371 addons.go:69] Setting default-storageclass=true in profile "bridge-266997"
	I0214 22:01:25.479721  304371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-266997"
	I0214 22:01:25.479683  304371 addons.go:69] Setting storage-provisioner=true in profile "bridge-266997"
	I0214 22:01:25.479825  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:25.479828  304371 addons.go:238] Setting addon storage-provisioner=true in "bridge-266997"
	I0214 22:01:25.479933  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.480344  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480370  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480383  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.480400  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.481183  304371 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:25.482440  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:25.495953  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42079
	I0214 22:01:25.495973  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0214 22:01:25.496360  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496536  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496851  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.496873  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497082  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.497104  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497237  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.497486  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.497490  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.498041  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.498075  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.500794  304371 addons.go:238] Setting addon default-storageclass=true in "bridge-266997"
	I0214 22:01:25.500829  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.501072  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.501096  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.512606  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0214 22:01:25.512964  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.513385  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.513407  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.513770  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.513947  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.515505  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.517101  304371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:25.518333  304371 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.518354  304371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:25.518373  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.520011  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0214 22:01:25.520422  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.520847  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.520869  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.521183  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.521437  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.521710  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.521753  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.521881  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.521906  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.522179  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.522387  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.522543  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.522708  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.535515  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0214 22:01:25.535896  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.536315  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.536343  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.536695  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.536861  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.538765  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.538948  304371 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:25.538962  304371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:25.538976  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.541815  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542297  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.542316  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542488  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.542694  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.542878  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.543023  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.709288  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:25.709340  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 22:01:25.818938  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.883618  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:26.231097  304371 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0214 22:01:26.232118  304371 node_ready.go:35] waiting up to 15m0s for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244261  304371 node_ready.go:49] node "bridge-266997" is "Ready"
	I0214 22:01:26.244293  304371 node_ready.go:38] duration metric: took 12.148864ms for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244325  304371 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:26.244387  304371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:26.454003  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454033  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454062  304371 api_server.go:72] duration metric: took 974.324958ms to wait for apiserver process to appear ...
	I0214 22:01:26.454104  304371 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:26.454137  304371 api_server.go:253] Checking apiserver healthz at https://192.168.50.81:8443/healthz ...
	I0214 22:01:26.454282  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454299  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454449  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454476  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454486  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454495  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454560  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454577  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454580  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.454586  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454600  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454869  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454887  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454929  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.457012  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.457107  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.457041  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.464354  304371 api_server.go:279] https://192.168.50.81:8443/healthz returned 200:
	ok
	I0214 22:01:26.465264  304371 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:26.465285  304371 api_server.go:131] duration metric: took 11.170116ms to wait for apiserver health ...
	I0214 22:01:26.465296  304371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:26.471233  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.471249  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.471450  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.471473  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.471853  304371 system_pods.go:59] 8 kube-system pods found
	I0214 22:01:26.471889  304371 system_pods.go:61] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471903  304371 system_pods.go:61] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471917  304371 system_pods.go:61] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.471930  304371 system_pods.go:61] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.471941  304371 system_pods.go:61] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.471957  304371 system_pods.go:61] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.471966  304371 system_pods.go:61] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.471979  304371 system_pods.go:61] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending
	I0214 22:01:26.471988  304371 system_pods.go:74] duration metric: took 6.684999ms to wait for pod list to return data ...
	I0214 22:01:26.472001  304371 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:26.472806  304371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 22:01:24.002770  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:24.015631  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:24.015700  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:24.051601  296043 cri.go:89] found id: ""
	I0214 22:01:24.051637  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.051649  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:24.051657  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:24.051710  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:24.084938  296043 cri.go:89] found id: ""
	I0214 22:01:24.084963  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.084971  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:24.084977  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:24.085019  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:24.118982  296043 cri.go:89] found id: ""
	I0214 22:01:24.119012  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.119023  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:24.119030  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:24.119091  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:24.150809  296043 cri.go:89] found id: ""
	I0214 22:01:24.150838  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.150849  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:24.150857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:24.150927  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:24.180499  296043 cri.go:89] found id: ""
	I0214 22:01:24.180527  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.180538  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:24.180546  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:24.180613  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:24.214503  296043 cri.go:89] found id: ""
	I0214 22:01:24.214531  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.214542  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:24.214550  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:24.214616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:24.250992  296043 cri.go:89] found id: ""
	I0214 22:01:24.251018  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.251026  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:24.251032  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:24.251090  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:24.287791  296043 cri.go:89] found id: ""
	I0214 22:01:24.287816  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.287824  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:24.287839  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:24.287854  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:24.324499  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:24.324533  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:24.373673  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:24.373700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:24.387527  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:24.387558  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:24.464362  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:24.464394  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:24.464409  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:26.474033  304371 addons.go:514] duration metric: took 994.441902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:26.476260  304371 default_sa.go:45] found service account: "default"
	I0214 22:01:26.476283  304371 default_sa.go:55] duration metric: took 4.273083ms for default service account to be created ...
	I0214 22:01:26.476293  304371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:26.480354  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.480386  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480397  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480410  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.480419  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.480429  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.480435  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.480445  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.480457  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.480479  304371 retry.go:31] will retry after 268.412371ms: missing components: kube-dns
	I0214 22:01:26.734480  304371 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-266997" context rescaled to 1 replicas
	I0214 22:01:26.752596  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.752625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752632  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752639  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.752645  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.752649  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.752654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.752663  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.752668  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.752683  304371 retry.go:31] will retry after 253.744271ms: missing components: kube-dns
	I0214 22:01:27.010128  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.010160  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010169  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010176  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.010182  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.010187  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.010190  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.010195  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.010200  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:27.010215  304371 retry.go:31] will retry after 373.755847ms: missing components: kube-dns
	I0214 22:01:27.387928  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.387976  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.387988  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.388001  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.388015  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.388022  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.388031  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.388040  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.388048  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.388073  304371 retry.go:31] will retry after 449.518817ms: missing components: kube-dns
	I0214 22:01:27.841591  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.841625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841633  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841640  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.841646  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.841650  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.841654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.841661  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.841664  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.841680  304371 retry.go:31] will retry after 522.702646ms: missing components: kube-dns
	I0214 22:01:28.368689  304371 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:28.368725  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:28.368733  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:28.368741  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:28.368746  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:28.368753  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:28.368761  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:28.368765  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:28.368774  304371 system_pods.go:126] duration metric: took 1.892474517s to wait for k8s-apps to be running ...
	I0214 22:01:28.368785  304371 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:28.368830  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:28.383657  304371 system_svc.go:56] duration metric: took 14.862939ms WaitForService to wait for kubelet
	I0214 22:01:28.383685  304371 kubeadm.go:578] duration metric: took 2.903970849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:28.383703  304371 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:28.387139  304371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:28.387163  304371 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:28.387176  304371 node_conditions.go:105] duration metric: took 3.468187ms to run NodePressure ...
	I0214 22:01:28.387187  304371 start.go:241] waiting for startup goroutines ...
	I0214 22:01:28.387200  304371 start.go:246] waiting for cluster config update ...
	I0214 22:01:28.387215  304371 start.go:255] writing updated cluster config ...
	I0214 22:01:28.387551  304371 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:28.391627  304371 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:28.395108  304371 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:27.040249  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:27.052990  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:27.053055  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:27.092109  296043 cri.go:89] found id: ""
	I0214 22:01:27.092138  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.092150  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:27.092158  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:27.092219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:27.128290  296043 cri.go:89] found id: ""
	I0214 22:01:27.128323  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.128336  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:27.128344  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:27.128413  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:27.166086  296043 cri.go:89] found id: ""
	I0214 22:01:27.166113  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.166121  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:27.166127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:27.166174  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:27.198082  296043 cri.go:89] found id: ""
	I0214 22:01:27.198114  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.198126  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:27.198133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:27.198196  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:27.229133  296043 cri.go:89] found id: ""
	I0214 22:01:27.229167  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.229182  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:27.229190  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:27.229253  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:27.267454  296043 cri.go:89] found id: ""
	I0214 22:01:27.267483  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.267495  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:27.267504  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:27.267570  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:27.306235  296043 cri.go:89] found id: ""
	I0214 22:01:27.306265  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.306277  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:27.306289  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:27.306368  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:27.337862  296043 cri.go:89] found id: ""
	I0214 22:01:27.337894  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.337905  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:27.337916  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:27.337928  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:27.384978  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:27.385007  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:27.398968  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:27.398999  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:27.468335  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:27.468363  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:27.468379  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:27.549329  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:27.549363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:30.097135  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:30.110653  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:30.110740  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:30.148484  296043 cri.go:89] found id: ""
	I0214 22:01:30.148518  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.148530  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:30.148538  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:30.148611  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:30.183761  296043 cri.go:89] found id: ""
	I0214 22:01:30.183791  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.183802  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:30.183809  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:30.183866  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:30.216232  296043 cri.go:89] found id: ""
	I0214 22:01:30.216260  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.216271  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:30.216278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:30.216346  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:30.248173  296043 cri.go:89] found id: ""
	I0214 22:01:30.248199  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.248210  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:30.248217  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:30.248281  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:30.283288  296043 cri.go:89] found id: ""
	I0214 22:01:30.283318  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.283329  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:30.283350  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:30.283402  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:30.324270  296043 cri.go:89] found id: ""
	I0214 22:01:30.324297  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.324308  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:30.324317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:30.324373  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:30.360122  296043 cri.go:89] found id: ""
	I0214 22:01:30.360146  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.360154  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:30.360159  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:30.360207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:30.394546  296043 cri.go:89] found id: ""
	I0214 22:01:30.394571  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.394580  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:30.394594  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:30.394613  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:30.449231  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:30.449258  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:30.463475  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:30.463499  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:30.536719  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:30.536746  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:30.536762  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:30.619446  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:30.619484  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:01:30.438589  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:32.924767  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:33.159018  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:33.176759  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:33.176842  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:33.216502  296043 cri.go:89] found id: ""
	I0214 22:01:33.216527  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.216536  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:33.216542  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:33.216597  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:33.254772  296043 cri.go:89] found id: ""
	I0214 22:01:33.254799  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.254810  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:33.254817  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:33.254878  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:33.287687  296043 cri.go:89] found id: ""
	I0214 22:01:33.287713  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.287722  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:33.287728  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:33.287790  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:33.319969  296043 cri.go:89] found id: ""
	I0214 22:01:33.319990  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.319997  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:33.320002  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:33.320046  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:33.352720  296043 cri.go:89] found id: ""
	I0214 22:01:33.352740  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.352747  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:33.352752  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:33.352807  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:33.390638  296043 cri.go:89] found id: ""
	I0214 22:01:33.390662  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.390671  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:33.390678  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:33.390730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:33.425935  296043 cri.go:89] found id: ""
	I0214 22:01:33.425954  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.425962  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:33.425967  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:33.426012  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:33.459671  296043 cri.go:89] found id: ""
	I0214 22:01:33.459695  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.459705  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:33.459716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:33.459730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:33.535469  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:33.535493  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:33.570473  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:33.570501  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:33.619720  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:33.619745  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:33.631829  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:33.631850  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:33.701637  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.202577  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:36.216700  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:36.216761  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:36.250764  296043 cri.go:89] found id: ""
	I0214 22:01:36.250789  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.250798  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:36.250804  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:36.250853  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:36.284811  296043 cri.go:89] found id: ""
	I0214 22:01:36.284838  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.284850  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:36.284857  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:36.284916  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:36.321197  296043 cri.go:89] found id: ""
	I0214 22:01:36.321219  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.321227  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:36.321235  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:36.321277  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:36.354869  296043 cri.go:89] found id: ""
	I0214 22:01:36.354896  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.354907  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:36.354915  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:36.354967  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:36.393688  296043 cri.go:89] found id: ""
	I0214 22:01:36.393712  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.393722  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:36.393730  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:36.393781  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:36.427985  296043 cri.go:89] found id: ""
	I0214 22:01:36.428006  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.428015  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:36.428023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:36.428076  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:36.458367  296043 cri.go:89] found id: ""
	I0214 22:01:36.458386  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.458393  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:36.458398  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:36.458446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:36.489038  296043 cri.go:89] found id: ""
	I0214 22:01:36.489061  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.489069  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:36.489080  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:36.489093  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:36.526950  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:36.526971  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:36.577258  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:36.577293  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:36.589545  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:36.589567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:36.658634  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.658656  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:36.658674  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0214 22:01:35.400875  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:37.900278  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:38.401005  304371 pod_ready.go:94] pod "coredns-668d6bf9bc-m2ggw" is "Ready"
	I0214 22:01:38.401031  304371 pod_ready.go:86] duration metric: took 10.005896118s for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.403160  304371 pod_ready.go:83] waiting for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.407295  304371 pod_ready.go:94] pod "etcd-bridge-266997" is "Ready"
	I0214 22:01:38.407320  304371 pod_ready.go:86] duration metric: took 4.131989ms for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.409214  304371 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.413019  304371 pod_ready.go:94] pod "kube-apiserver-bridge-266997" is "Ready"
	I0214 22:01:38.413047  304371 pod_ready.go:86] duration metric: took 3.813497ms for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.414707  304371 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.598300  304371 pod_ready.go:94] pod "kube-controller-manager-bridge-266997" is "Ready"
	I0214 22:01:38.598321  304371 pod_ready.go:86] duration metric: took 183.594312ms for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.799339  304371 pod_ready.go:83] waiting for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.198982  304371 pod_ready.go:94] pod "kube-proxy-xdwmc" is "Ready"
	I0214 22:01:39.199006  304371 pod_ready.go:86] duration metric: took 399.648451ms for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.400069  304371 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800157  304371 pod_ready.go:94] pod "kube-scheduler-bridge-266997" is "Ready"
	I0214 22:01:39.800184  304371 pod_ready.go:86] duration metric: took 400.072932ms for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800195  304371 pod_ready.go:40] duration metric: took 11.408545307s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:39.844662  304371 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:39.846593  304371 out.go:177] * Done! kubectl is now configured to use "bridge-266997" cluster and "default" namespace by default
	I0214 22:01:39.231339  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:39.244717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:39.244765  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:39.277734  296043 cri.go:89] found id: ""
	I0214 22:01:39.277756  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.277766  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:39.277773  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:39.277836  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:39.309896  296043 cri.go:89] found id: ""
	I0214 22:01:39.309916  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.309923  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:39.309931  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:39.309979  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:39.342579  296043 cri.go:89] found id: ""
	I0214 22:01:39.342608  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.342619  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:39.342637  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:39.342686  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:39.378083  296043 cri.go:89] found id: ""
	I0214 22:01:39.378112  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.378124  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:39.378134  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:39.378192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:39.414803  296043 cri.go:89] found id: ""
	I0214 22:01:39.414828  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.414842  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:39.414850  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:39.414904  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:39.449659  296043 cri.go:89] found id: ""
	I0214 22:01:39.449690  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.449702  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:39.449711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:39.449778  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:39.486261  296043 cri.go:89] found id: ""
	I0214 22:01:39.486288  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.486300  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:39.486308  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:39.486371  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:39.518224  296043 cri.go:89] found id: ""
	I0214 22:01:39.518245  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.518253  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:39.518264  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:39.518277  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:39.598112  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:39.598145  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:39.634704  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:39.634727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:39.685193  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:39.685217  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:39.697332  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:39.697355  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:39.773514  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.273720  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:42.290415  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:42.290491  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:42.329509  296043 cri.go:89] found id: ""
	I0214 22:01:42.329539  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.329549  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:42.329556  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:42.329616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:42.366218  296043 cri.go:89] found id: ""
	I0214 22:01:42.366247  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.366259  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:42.366267  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:42.366324  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:42.404603  296043 cri.go:89] found id: ""
	I0214 22:01:42.404627  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.404634  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:42.404641  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:42.404691  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:42.437980  296043 cri.go:89] found id: ""
	I0214 22:01:42.438008  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.438017  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:42.438023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:42.438072  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:42.470475  296043 cri.go:89] found id: ""
	I0214 22:01:42.470505  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.470517  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:42.470526  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:42.470592  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:42.503557  296043 cri.go:89] found id: ""
	I0214 22:01:42.503593  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.503606  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:42.503614  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:42.503681  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:42.537499  296043 cri.go:89] found id: ""
	I0214 22:01:42.537549  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.537559  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:42.537568  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:42.537629  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:42.581710  296043 cri.go:89] found id: ""
	I0214 22:01:42.581740  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.581752  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:42.581765  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:42.581785  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:42.594891  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:42.594920  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:42.675186  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.675207  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:42.675221  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:42.762000  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:42.762033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:42.813591  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:42.813644  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.368276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:45.383477  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:45.383541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:45.419199  296043 cri.go:89] found id: ""
	I0214 22:01:45.419226  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.419235  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:45.419242  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:45.419286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:45.457708  296043 cri.go:89] found id: ""
	I0214 22:01:45.457740  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.457752  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:45.457761  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:45.457831  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:45.497110  296043 cri.go:89] found id: ""
	I0214 22:01:45.497138  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.497146  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:45.497154  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:45.497220  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:45.534294  296043 cri.go:89] found id: ""
	I0214 22:01:45.534318  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.534326  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:45.534333  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:45.534392  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:45.575462  296043 cri.go:89] found id: ""
	I0214 22:01:45.575492  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.575504  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:45.575513  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:45.575573  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:45.615590  296043 cri.go:89] found id: ""
	I0214 22:01:45.615620  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.615631  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:45.615639  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:45.615694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:45.655779  296043 cri.go:89] found id: ""
	I0214 22:01:45.655813  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.655826  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:45.655834  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:45.655903  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:45.691350  296043 cri.go:89] found id: ""
	I0214 22:01:45.691376  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.691386  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:45.691395  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:45.691407  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.749784  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:45.749833  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:45.764193  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:45.764225  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:45.836887  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:45.836914  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:45.836930  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:45.943944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:45.943974  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.486718  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:48.500667  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:48.500730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:48.539749  296043 cri.go:89] found id: ""
	I0214 22:01:48.539775  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.539785  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:48.539794  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:48.539846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:48.576675  296043 cri.go:89] found id: ""
	I0214 22:01:48.576703  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.576714  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:48.576723  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:48.576776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:48.608593  296043 cri.go:89] found id: ""
	I0214 22:01:48.608618  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.608627  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:48.608634  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:48.608684  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:48.644181  296043 cri.go:89] found id: ""
	I0214 22:01:48.644210  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.644221  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:48.644228  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:48.644280  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:48.681188  296043 cri.go:89] found id: ""
	I0214 22:01:48.681214  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.681224  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:48.681232  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:48.681286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:48.719817  296043 cri.go:89] found id: ""
	I0214 22:01:48.719847  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.719857  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:48.719865  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:48.719922  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:48.756080  296043 cri.go:89] found id: ""
	I0214 22:01:48.756107  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.756119  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:48.756127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:48.756188  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:48.796664  296043 cri.go:89] found id: ""
	I0214 22:01:48.796692  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.796703  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:48.796716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:48.796730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:48.877633  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:48.877660  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.924693  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:48.924726  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:48.980014  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:48.980045  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:48.993129  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:48.993153  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:49.067409  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:51.568106  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:51.583193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:51.583254  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:51.620026  296043 cri.go:89] found id: ""
	I0214 22:01:51.620050  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.620058  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:51.620063  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:51.620120  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:51.654068  296043 cri.go:89] found id: ""
	I0214 22:01:51.654103  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.654114  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:51.654122  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:51.654176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:51.689022  296043 cri.go:89] found id: ""
	I0214 22:01:51.689047  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.689055  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:51.689062  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:51.689118  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:51.725479  296043 cri.go:89] found id: ""
	I0214 22:01:51.725503  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.725513  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:51.725524  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:51.725576  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:51.761617  296043 cri.go:89] found id: ""
	I0214 22:01:51.761644  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.761653  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:51.761660  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:51.761719  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:51.802942  296043 cri.go:89] found id: ""
	I0214 22:01:51.802963  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.802972  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:51.802979  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:51.803027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:51.843214  296043 cri.go:89] found id: ""
	I0214 22:01:51.843242  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.843252  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:51.843264  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:51.843316  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:51.910513  296043 cri.go:89] found id: ""
	I0214 22:01:51.910550  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.910562  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:51.910576  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:51.910594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:51.923639  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:51.923676  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:52.014337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:52.014366  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:52.014384  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:52.106586  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:52.106617  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:52.154349  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:52.154376  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:54.715843  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:54.729644  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:54.729694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:54.766181  296043 cri.go:89] found id: ""
	I0214 22:01:54.766200  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.766210  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:54.766216  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:54.766276  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:54.808010  296043 cri.go:89] found id: ""
	I0214 22:01:54.808039  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.808050  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:54.808064  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:54.808130  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:54.856672  296043 cri.go:89] found id: ""
	I0214 22:01:54.856693  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.856711  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:54.856717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:54.856762  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:54.906801  296043 cri.go:89] found id: ""
	I0214 22:01:54.906820  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.906827  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:54.906833  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:54.906873  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:54.951444  296043 cri.go:89] found id: ""
	I0214 22:01:54.951467  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.951477  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:54.951485  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:54.951539  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:54.993431  296043 cri.go:89] found id: ""
	I0214 22:01:54.993457  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.993468  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:54.993476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:54.993520  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:55.040664  296043 cri.go:89] found id: ""
	I0214 22:01:55.040714  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.040726  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:55.040735  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:55.040793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:55.080280  296043 cri.go:89] found id: ""
	I0214 22:01:55.080309  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.080317  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:55.080327  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:55.080342  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:55.141974  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:55.142012  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:55.159407  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:55.159436  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:55.238973  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:55.238998  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:55.239010  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:55.326876  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:55.326907  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:57.883816  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:57.898210  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:57.898270  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:57.933120  296043 cri.go:89] found id: ""
	I0214 22:01:57.933146  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.933155  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:57.933163  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:57.933219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:57.968047  296043 cri.go:89] found id: ""
	I0214 22:01:57.968072  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.968089  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:57.968096  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:57.968150  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:58.007167  296043 cri.go:89] found id: ""
	I0214 22:01:58.007194  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.007205  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:58.007213  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:58.007263  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:58.044221  296043 cri.go:89] found id: ""
	I0214 22:01:58.044249  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.044259  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:58.044270  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:58.044322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:58.079197  296043 cri.go:89] found id: ""
	I0214 22:01:58.079226  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.079237  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:58.079246  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:58.079308  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:58.115726  296043 cri.go:89] found id: ""
	I0214 22:01:58.115757  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.115768  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:58.115779  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:58.115833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:58.151192  296043 cri.go:89] found id: ""
	I0214 22:01:58.151218  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.151226  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:58.151231  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:58.151279  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:58.186512  296043 cri.go:89] found id: ""
	I0214 22:01:58.186531  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.186539  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:58.186548  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:58.186559  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:58.225500  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:58.225528  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:58.273842  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:58.273869  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:58.297373  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:58.297401  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:58.403111  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:58.403131  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:58.403155  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:00.996658  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:01.013323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:01.013388  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:01.054606  296043 cri.go:89] found id: ""
	I0214 22:02:01.054647  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.054659  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:01.054667  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:01.054729  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:01.091830  296043 cri.go:89] found id: ""
	I0214 22:02:01.091860  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.091870  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:01.091878  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:01.091933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:01.127100  296043 cri.go:89] found id: ""
	I0214 22:02:01.127126  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.127133  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:01.127139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:01.127176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:01.160268  296043 cri.go:89] found id: ""
	I0214 22:02:01.160291  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.160298  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:01.160304  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:01.160354  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:01.192244  296043 cri.go:89] found id: ""
	I0214 22:02:01.192277  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.192290  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:01.192301  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:01.192372  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:01.226746  296043 cri.go:89] found id: ""
	I0214 22:02:01.226777  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.226787  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:01.226797  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:01.226848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:01.264235  296043 cri.go:89] found id: ""
	I0214 22:02:01.264257  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.264266  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:01.264274  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:01.264325  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:01.299082  296043 cri.go:89] found id: ""
	I0214 22:02:01.299107  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.299119  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:01.299137  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:01.299152  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:01.374067  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:01.374087  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:01.374100  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:01.466814  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:01.466842  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:01.508566  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:01.508591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:01.565286  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:01.565318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.079276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:04.098100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:04.098168  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:04.148307  296043 cri.go:89] found id: ""
	I0214 22:02:04.148338  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.148347  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:04.148353  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:04.148401  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:04.182456  296043 cri.go:89] found id: ""
	I0214 22:02:04.182483  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.182493  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:04.182500  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:04.182548  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:04.222072  296043 cri.go:89] found id: ""
	I0214 22:02:04.222099  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.222107  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:04.222112  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:04.222155  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:04.255053  296043 cri.go:89] found id: ""
	I0214 22:02:04.255082  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.255092  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:04.255100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:04.255154  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:04.293951  296043 cri.go:89] found id: ""
	I0214 22:02:04.293982  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.293991  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:04.293998  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:04.294051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:04.334092  296043 cri.go:89] found id: ""
	I0214 22:02:04.334115  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.334123  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:04.334130  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:04.334179  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:04.366129  296043 cri.go:89] found id: ""
	I0214 22:02:04.366148  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.366160  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:04.366166  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:04.366207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:04.398508  296043 cri.go:89] found id: ""
	I0214 22:02:04.398532  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.398541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:04.398554  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:04.398567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:04.446518  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:04.446547  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.459347  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:04.459368  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:04.535181  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:04.535198  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:04.535212  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:04.608858  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:04.608891  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
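The cycle above (pgrep for kube-apiserver, a crictl listing per control-plane component, then kubelet/dmesg/describe-nodes/CRI-O/container-status gathering) is minikube polling for a control plane that never comes up. A minimal sketch of repeating the same per-component check by hand, assuming it is run inside the minikube VM (for example via "minikube ssh") and using only the commands already shown in this log:

	# Poll for the API server process and list any control-plane containers known to CRI-O,
	# mirroring the checks minikube repeats in the records above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not running"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -n "$ids" ] && echo "$name: $ids" || echo "no container found matching $name"
	done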
	I0214 22:02:07.150996  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:07.164414  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:07.164466  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:07.198549  296043 cri.go:89] found id: ""
	I0214 22:02:07.198571  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.198579  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:07.198585  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:07.198644  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:07.231429  296043 cri.go:89] found id: ""
	I0214 22:02:07.231454  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.231465  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:07.231472  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:07.231527  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:07.262244  296043 cri.go:89] found id: ""
	I0214 22:02:07.262266  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.262273  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:07.262278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:07.262322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:07.292654  296043 cri.go:89] found id: ""
	I0214 22:02:07.292670  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.292677  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:07.292686  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:07.292731  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:07.325893  296043 cri.go:89] found id: ""
	I0214 22:02:07.325911  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.325918  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:07.325923  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:07.325961  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:07.358776  296043 cri.go:89] found id: ""
	I0214 22:02:07.358799  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.358806  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:07.358811  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:07.358855  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:07.392029  296043 cri.go:89] found id: ""
	I0214 22:02:07.392052  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.392062  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:07.392073  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:07.392132  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:07.423080  296043 cri.go:89] found id: ""
	I0214 22:02:07.423105  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.423115  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:07.423128  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:07.423142  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:07.473625  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:07.473649  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:07.486487  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:07.486510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:07.550364  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:07.550387  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:07.550400  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:07.620727  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:07.620750  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:10.158575  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:10.171139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:10.171189  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:10.203796  296043 cri.go:89] found id: ""
	I0214 22:02:10.203825  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.203837  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:10.203847  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:10.203905  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:10.235261  296043 cri.go:89] found id: ""
	I0214 22:02:10.235279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.235287  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:10.235292  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:10.235331  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:10.267017  296043 cri.go:89] found id: ""
	I0214 22:02:10.267037  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.267044  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:10.267052  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:10.267110  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:10.298100  296043 cri.go:89] found id: ""
	I0214 22:02:10.298121  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.298127  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:10.298133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:10.298173  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:10.330163  296043 cri.go:89] found id: ""
	I0214 22:02:10.330189  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.330196  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:10.330205  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:10.330257  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:10.363253  296043 cri.go:89] found id: ""
	I0214 22:02:10.363279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.363287  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:10.363293  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:10.363345  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:10.393052  296043 cri.go:89] found id: ""
	I0214 22:02:10.393073  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.393081  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:10.393086  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:10.393124  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:10.423261  296043 cri.go:89] found id: ""
	I0214 22:02:10.423284  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.423292  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:10.423302  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:10.423314  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:10.474817  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:10.474839  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:10.487089  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:10.487117  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:10.552798  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:10.552818  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:10.552827  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:10.633678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:10.633700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:13.175779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:13.188862  296043 kubeadm.go:593] duration metric: took 4m4.534890262s to restartPrimaryControlPlane
	W0214 22:02:13.188929  296043 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0214 22:02:13.188953  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:02:14.903694  296043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.714713868s)
	I0214 22:02:14.903774  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:02:14.917520  296043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:02:14.927114  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:02:14.936531  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:02:14.936548  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:02:14.936593  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:02:14.945506  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:02:14.945543  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:02:14.954573  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:02:14.963268  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:02:14.963308  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:02:14.972385  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.981144  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:02:14.981190  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.990181  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:02:14.998739  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:02:14.998781  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
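Each grep/rm pair above is minikube's stale-config check: a kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. A compact sketch of that pattern, assuming the same endpoint and file set shown in the log:

	# Remove any kubeconfig that does not point at the expected control-plane endpoint,
	# as minikube does above before re-running kubeadm init.
	endpoint="https://control-plane.minikube.internal:8443"
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
	    sudo rm -f "/etc/kubernetes/$conf"
	  fi
	done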
	I0214 22:02:15.007880  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:02:15.079968  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:02:15.080063  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:02:15.227132  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:02:15.227264  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:02:15.227363  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:02:15.399613  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:02:15.401413  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:02:15.401514  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:02:15.401584  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:02:15.401699  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:02:15.401787  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:02:15.401887  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:02:15.403287  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:02:15.403395  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:02:15.403485  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:02:15.403584  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:02:15.403691  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:02:15.403760  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:02:15.403854  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:02:15.575946  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:02:15.646531  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:02:16.039563  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:02:16.210385  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:02:16.225322  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:02:16.226388  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:02:16.226445  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:02:16.354308  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:02:16.356102  296043 out.go:235]   - Booting up control plane ...
	I0214 22:02:16.356211  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:02:16.360283  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:02:16.361731  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:02:16.362515  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:02:16.373807  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:02:56.375481  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:02:56.376996  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:02:56.377215  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:01.377539  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:01.377722  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:11.378071  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:11.378255  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:31.379013  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:31.379253  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.380898  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:11.381134  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.381161  296043 kubeadm.go:310] 
	I0214 22:04:11.381223  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:04:11.381276  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:04:11.381287  296043 kubeadm.go:310] 
	I0214 22:04:11.381330  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:04:11.381386  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:04:11.381508  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:04:11.381517  296043 kubeadm.go:310] 
	I0214 22:04:11.381610  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:04:11.381661  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:04:11.381706  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:04:11.381713  296043 kubeadm.go:310] 
	I0214 22:04:11.381844  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:04:11.381962  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:04:11.381985  296043 kubeadm.go:310] 
	I0214 22:04:11.382159  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:04:11.382269  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:04:11.382378  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:04:11.382478  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:04:11.382488  296043 kubeadm.go:310] 
	I0214 22:04:11.383608  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:04:11.383712  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:04:11.383805  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
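The [kubelet-check] lines above show kubeadm polling the kubelet's local health endpoint and never getting an answer. A minimal sketch of checking the same things by hand on the node, using only the endpoint and commands kubeadm's advice already names:

	# Probe the kubelet health endpoint that kubeadm polls, then inspect the service and its logs.
	curl -sSL http://localhost:10248/healthz ; echo
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# List any Kubernetes containers CRI-O did manage to start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause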
	W0214 22:04:11.383962  296043 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 22:04:11.384029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:04:11.847932  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:04:11.862250  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:04:11.872076  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:04:11.872096  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:04:11.872141  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:04:11.881248  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:04:11.881299  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:04:11.890591  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:04:11.899561  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:04:11.899609  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:04:11.908818  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.917642  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:04:11.917688  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.926938  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:04:11.936007  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:04:11.936053  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:04:11.945314  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:04:12.015411  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:04:12.015466  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:04:12.151668  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:04:12.151844  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:04:12.151988  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:04:12.322327  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:04:12.324344  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:04:12.324451  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:04:12.324530  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:04:12.324659  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:04:12.324761  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:04:12.324855  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:04:12.324934  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:04:12.325109  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:04:12.325566  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:04:12.325866  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:04:12.326334  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:04:12.326391  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:04:12.326453  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:04:12.468450  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:04:12.741068  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:04:12.905628  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:04:13.075487  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:04:13.093105  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:04:13.093840  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:04:13.093897  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:04:13.225868  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:04:13.227602  296043 out.go:235]   - Booting up control plane ...
	I0214 22:04:13.227715  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:04:13.235626  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:04:13.238592  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:04:13.239495  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:04:13.246539  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:04:53.249274  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:04:53.249602  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:53.249764  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:58.250244  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:58.250486  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:08.251032  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:08.251247  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:28.253223  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:28.253527  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252450  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:06:08.252752  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252786  296043 kubeadm.go:310] 
	I0214 22:06:08.252841  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:06:08.252891  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:06:08.252909  296043 kubeadm.go:310] 
	I0214 22:06:08.252957  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:06:08.253010  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:06:08.253150  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:06:08.253160  296043 kubeadm.go:310] 
	I0214 22:06:08.253287  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:06:08.253332  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:06:08.253372  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:06:08.253403  296043 kubeadm.go:310] 
	I0214 22:06:08.253569  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:06:08.253692  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:06:08.253701  296043 kubeadm.go:310] 
	I0214 22:06:08.253861  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:06:08.253990  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:06:08.254095  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:06:08.254195  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:06:08.254206  296043 kubeadm.go:310] 
	I0214 22:06:08.254491  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:06:08.254637  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:06:08.254748  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 22:06:08.254848  296043 kubeadm.go:394] duration metric: took 7m59.662371118s to StartCluster
	I0214 22:06:08.254965  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:06:08.255027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:06:08.298673  296043 cri.go:89] found id: ""
	I0214 22:06:08.298694  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.298702  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:06:08.298709  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:06:08.298777  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:06:08.329697  296043 cri.go:89] found id: ""
	I0214 22:06:08.329717  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.329724  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:06:08.329729  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:06:08.329779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:06:08.360276  296043 cri.go:89] found id: ""
	I0214 22:06:08.360296  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.360304  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:06:08.360310  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:06:08.360370  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:06:08.391153  296043 cri.go:89] found id: ""
	I0214 22:06:08.391180  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.391188  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:06:08.391193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:06:08.391244  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:06:08.421880  296043 cri.go:89] found id: ""
	I0214 22:06:08.421907  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.421917  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:06:08.421924  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:06:08.421974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:06:08.453558  296043 cri.go:89] found id: ""
	I0214 22:06:08.453578  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.453587  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:06:08.453594  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:06:08.453641  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:06:08.495718  296043 cri.go:89] found id: ""
	I0214 22:06:08.495750  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.495761  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:06:08.495772  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:06:08.495845  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:06:08.542115  296043 cri.go:89] found id: ""
	I0214 22:06:08.542141  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.542152  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:06:08.542165  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:06:08.542180  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:06:08.605825  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:06:08.605851  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:06:08.621228  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:06:08.621251  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:06:08.696999  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:06:08.697025  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:06:08.697050  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:06:08.796690  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:06:08.796716  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:06:08.834010  296043 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 22:06:08.834068  296043 out.go:270] * 
	W0214 22:06:08.834153  296043 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.834166  296043 out.go:270] * 
	W0214 22:06:08.835011  296043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 22:06:08.838512  296043 out.go:201] 
	W0214 22:06:08.839577  296043 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.839631  296043 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 22:06:08.839655  296043 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 22:06:08.840885  296043 out.go:201] 
	
	
	==> CRI-O <==
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.915618680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739570769915604253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=639b9e7e-46ee-4f25-b3a6-e5318aa727a7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.916537168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2fc1e20-4ed0-46b9-beb5-e5cdfc0a8abf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.916602521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2fc1e20-4ed0-46b9-beb5-e5cdfc0a8abf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.916636330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2fc1e20-4ed0-46b9-beb5-e5cdfc0a8abf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.947793854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=184d4301-3d2b-4213-b5d8-0f112ce0a6bf name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.947862846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=184d4301-3d2b-4213-b5d8-0f112ce0a6bf name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.948763653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9367720c-be8e-432b-b5bd-5e8125c31007 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.949117046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739570769949099818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9367720c-be8e-432b-b5bd-5e8125c31007 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.949719544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a25e5174-e481-4b3f-9265-c98e547e94f8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.949781459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a25e5174-e481-4b3f-9265-c98e547e94f8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.949817822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a25e5174-e481-4b3f-9265-c98e547e94f8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.977818377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53937736-b490-40d6-b6cc-fb023514f879 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.977882412Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53937736-b490-40d6-b6cc-fb023514f879 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.978927622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c9571cc-9387-4445-beee-00cfe38c7c93 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.979337681Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739570769979321679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c9571cc-9387-4445-beee-00cfe38c7c93 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.979701978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fde7498-3812-4a2c-92be-125a74c085f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.979772939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fde7498-3812-4a2c-92be-125a74c085f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:09 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:09.979804394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3fde7498-3812-4a2c-92be-125a74c085f0 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.010453803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5816a57d-90c8-428d-885e-45befd4bb162 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.010524253Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5816a57d-90c8-428d-885e-45befd4bb162 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.011797827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fac02c00-dc9e-44f9-ad82-0e9597cb5757 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.012152835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739570770012130013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fac02c00-dc9e-44f9-ad82-0e9597cb5757 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.012731049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4828d83c-3e17-4df0-a102-1491fe68e7ac name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.012775393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4828d83c-3e17-4df0-a102-1491fe68e7ac name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:06:10 old-k8s-version-201745 crio[638]: time="2025-02-14 22:06:10.012803891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4828d83c-3e17-4df0-a102-1491fe68e7ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb14 21:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060243] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046957] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.427674] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.890736] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.894421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.931911] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.056852] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063369] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.207712] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.154341] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[Feb14 21:58] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.870486] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.069737] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.465278] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +9.377456] kauditd_printk_skb: 46 callbacks suppressed
	[Feb14 22:02] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Feb14 22:04] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.064085] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:06:10 up 8 min,  0 users,  load average: 0.06, 0.13, 0.09
	Linux old-k8s-version-201745 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0004e6460, 0xc000970a50, 0xc000970a50, 0x0, 0x0)
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000018540)
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: goroutine 163 [runnable]:
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b57e00, 0x1, 0x0, 0x0, 0x0, 0x0)
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0002be180, 0x0, 0x0)
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000018540)
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Feb 14 22:06:07 old-k8s-version-201745 kubelet[5490]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Feb 14 22:06:07 old-k8s-version-201745 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 22:06:07 old-k8s-version-201745 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 22:06:08 old-k8s-version-201745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 14 22:06:08 old-k8s-version-201745 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 22:06:08 old-k8s-version-201745 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 22:06:08 old-k8s-version-201745 kubelet[5536]: I0214 22:06:08.596647    5536 server.go:416] Version: v1.20.0
	Feb 14 22:06:08 old-k8s-version-201745 kubelet[5536]: I0214 22:06:08.596833    5536 server.go:837] Client rotation is on, will bootstrap in background
	Feb 14 22:06:08 old-k8s-version-201745 kubelet[5536]: I0214 22:06:08.599059    5536 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 22:06:08 old-k8s-version-201745 kubelet[5536]: W0214 22:06:08.600029    5536 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 14 22:06:08 old-k8s-version-201745 kubelet[5536]: I0214 22:06:08.600227    5536 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (235.926744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-201745" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (523.95s)
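A possible next step for the K8S_KUBELET_NOT_RUNNING failure above, sketched only from the Suggestion line in the captured log (the --extra-config flag and the journalctl check are the ones minikube itself prints; the profile name old-k8s-version-201745 is taken from this run, and these commands have not been re-run or verified here):

	# inspect the crash-looping kubelet on the node (same check the log recommends)
	out/minikube-linux-amd64 -p old-k8s-version-201745 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-201745 --extra-config=kubelet.cgroup-driver=systemd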

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:19.573970  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:23.275719  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.282158  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.293574  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.314840  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.356206  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.437818  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:23.599258  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:23.920560  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:24.562668  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:25.844862  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:28.406393  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:31.605012  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:33.528182  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:36.452890  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:40.309415  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.315738  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.327053  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.348446  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.389777  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.471391  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:40.632975  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:40.954488  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:41.596590  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:42.290171  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:42.878677  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:43.769987  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:45.440695  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:06:50.254520  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:50.562682  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:07:00.536167  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:07:00.804758  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:07:04.252023  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 11 more times)
E0214 22:07:21.286807  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:07:24.507508  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 20 more times)
E0214 22:07:45.214039  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 7 more times)
E0214 22:07:53.526453  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 6 more times)
E0214 22:08:00.199742  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:08:02.249003  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:08:04.860716  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 16 more times)
E0214 22:08:22.458598  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 5 more times)
E0214 22:08:27.901963  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 37 more times)
E0214 22:09:06.394540  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:09:07.135962  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 16 more times)
E0214 22:09:24.170343  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 9 more times)
E0214 22:09:34.096806  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 5 more times)
E0214 22:09:40.647967  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 20 more times)
E0214 22:10:01.320521  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:10:08.349197  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:10:09.665580  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:10:13.381806  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:10:37.368373  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:10:38.597718  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:06.300441  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:23.275076  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:40.309900  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:42.290762  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:11:50.978356  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:12:08.012525  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:13:00.200666  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:13:04.861136  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:14:06.394540  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:14:27.924643  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:14:40.648872  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:14:45.374249  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:15:01.321213  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:15:09.665666  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (233.204021ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-201745" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
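The wait that timed out above repeatedly lists pods matching the k8s-app=kubernetes-dashboard label against the profile's apiserver; each refused connection to 192.168.72.19:8443 produces one of the WARNING lines captured earlier. A minimal client-go sketch of one iteration of that kind of poll (illustrative only, not the test helper itself; the kubeconfig path is the one shown in the log below and the namespace is assumed):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the run's kubeconfig (path taken from the log; adjust as needed).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20315-243456/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// One iteration of the poll: list dashboard pods by label and print their phases.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// A stopped apiserver surfaces here as "connection refused",
			// matching the repeated WARNING lines above.
			fmt.Println("list failed:", err)
			return
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}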
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (211.013562ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
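The two status probes above shell out to the minikube binary with a Go template format string; exit status 2 means some component is not running and is tolerated ("may be ok") before post-mortem logs are collected. A small sketch of that invocation pattern (a hypothetical helper, not the test's actual code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus runs `minikube status --format={{.Host}}` for a profile and
	// treats exit status 2 as a degraded-but-reportable state, as the log does.
	func hostStatus(profile string) (string, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		status := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			return status, nil // e.g. a "Running" host while the apiserver is "Stopped"
		}
		return status, err
	}

	func main() {
		s, err := hostStatus("old-k8s-version-201745")
		fmt.Println(s, err)
	}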
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-201745 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-266997 sudo iptables                       | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo docker                         | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo find                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo crio                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-266997                                     | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 22:00:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 22:00:40.013497  304371 out.go:345] Setting OutFile to fd 1 ...
	I0214 22:00:40.013688  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013723  304371 out.go:358] Setting ErrFile to fd 2...
	I0214 22:00:40.013740  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013941  304371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 22:00:40.014539  304371 out.go:352] Setting JSON to false
	I0214 22:00:40.015878  304371 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9784,"bootTime":1739560656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 22:00:40.015969  304371 start.go:140] virtualization: kvm guest
	I0214 22:00:40.017995  304371 out.go:177] * [bridge-266997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 22:00:40.019548  304371 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 22:00:40.019559  304371 notify.go:220] Checking for updates...
	I0214 22:00:40.021770  304371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 22:00:40.022963  304371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:00:40.024165  304371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.025322  304371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 22:00:40.026557  304371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 22:00:40.028422  304371 config.go:182] Loaded profile config "enable-default-cni-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028571  304371 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028707  304371 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 22:00:40.028816  304371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 22:00:40.075364  304371 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 22:00:40.076500  304371 start.go:304] selected driver: kvm2
	I0214 22:00:40.076529  304371 start.go:908] validating driver "kvm2" against <nil>
	I0214 22:00:40.076547  304371 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 22:00:40.077631  304371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.077721  304371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 22:00:40.097536  304371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 22:00:40.097586  304371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 22:00:40.097859  304371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:00:40.097901  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:00:40.097911  304371 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 22:00:40.097991  304371 start.go:347] cluster config:
	{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:40.098147  304371 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.099655  304371 out.go:177] * Starting "bridge-266997" primary control-plane node in "bridge-266997" cluster
	I0214 22:00:40.100707  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:40.100759  304371 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 22:00:40.100773  304371 cache.go:56] Caching tarball of preloaded images
	I0214 22:00:40.100872  304371 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 22:00:40.100888  304371 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 22:00:40.100998  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:00:40.101023  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json: {Name:mk956d7ec0a679c86c01d5e19aaca4ffe835db04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:40.101195  304371 start.go:360] acquireMachinesLock for bridge-266997: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 22:00:40.739410  304371 start.go:364] duration metric: took 638.071669ms to acquireMachinesLock for "bridge-266997"
	I0214 22:00:40.739470  304371 start.go:93] Provisioning new machine with config: &{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterNa
me:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:00:40.739597  304371 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 22:00:38.638103  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638775  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has current primary IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638815  302662 main.go:141] libmachine: (flannel-266997) found domain IP: 192.168.61.227
	I0214 22:00:38.638837  302662 main.go:141] libmachine: (flannel-266997) reserving static IP address...
	I0214 22:00:38.639227  302662 main.go:141] libmachine: (flannel-266997) DBG | unable to find host DHCP lease matching {name: "flannel-266997", mac: "52:54:00:ee:24:91", ip: "192.168.61.227"} in network mk-flannel-266997
	I0214 22:00:38.720741  302662 main.go:141] libmachine: (flannel-266997) reserved static IP address 192.168.61.227 for domain flannel-266997
	I0214 22:00:38.720767  302662 main.go:141] libmachine: (flannel-266997) DBG | Getting to WaitForSSH function...
	I0214 22:00:38.720774  302662 main.go:141] libmachine: (flannel-266997) waiting for SSH...
	I0214 22:00:38.723657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724193  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.724222  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724376  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH client type: external
	I0214 22:00:38.724398  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa (-rw-------)
	I0214 22:00:38.724424  302662 main.go:141] libmachine: (flannel-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:00:38.724432  302662 main.go:141] libmachine: (flannel-266997) DBG | About to run SSH command:
	I0214 22:00:38.724443  302662 main.go:141] libmachine: (flannel-266997) DBG | exit 0
	I0214 22:00:38.855089  302662 main.go:141] libmachine: (flannel-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:00:38.855431  302662 main.go:141] libmachine: (flannel-266997) KVM machine creation complete
	I0214 22:00:38.855717  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:38.856304  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856540  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856736  302662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:00:38.856755  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:00:38.858099  302662 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:00:38.858126  302662 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:00:38.858133  302662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:00:38.858141  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.860473  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860742  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.860769  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860866  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.861047  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861239  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861397  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.861554  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.861789  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.861802  302662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:00:38.987056  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:38.987080  302662 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:00:38.987090  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.991287  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.991867  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.991901  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.992117  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.992347  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992546  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992737  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.992969  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.993199  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.993218  302662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:00:39.120019  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:00:39.120118  302662 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:00:39.120133  302662 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:00:39.120144  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120439  302662 buildroot.go:166] provisioning hostname "flannel-266997"
	I0214 22:00:39.120468  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120637  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.123699  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279544  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.279574  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279895  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.280156  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280385  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.280752  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.280990  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.281008  302662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-266997 && echo "flannel-266997" | sudo tee /etc/hostname
	I0214 22:00:39.418566  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-266997
	
	I0214 22:00:39.418600  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.696405  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.696786  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.696816  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.697106  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.697346  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697519  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697673  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.697837  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.698062  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.698079  302662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:00:39.838034  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:39.838073  302662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:00:39.838101  302662 buildroot.go:174] setting up certificates
	I0214 22:00:39.838118  302662 provision.go:84] configureAuth start
	I0214 22:00:39.838134  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.838437  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:39.841947  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842398  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.842423  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842549  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.845575  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846164  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.846413  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846385  302662 provision.go:143] copyHostCerts
	I0214 22:00:39.846558  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:00:39.846578  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:00:39.846685  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:00:39.846828  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:00:39.846841  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:00:39.846885  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:00:39.846995  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:00:39.847008  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:00:39.847066  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:00:39.847177  302662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.flannel-266997 san=[127.0.0.1 192.168.61.227 flannel-266997 localhost minikube]
	I0214 22:00:40.050848  302662 provision.go:177] copyRemoteCerts
	I0214 22:00:40.050928  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:00:40.050984  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.054657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055071  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.055100  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055790  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.056179  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.056663  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.056830  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.157340  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:00:40.184601  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0214 22:00:40.210273  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 22:00:40.235456  302662 provision.go:87] duration metric: took 397.323852ms to configureAuth
	I0214 22:00:40.235484  302662 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:00:40.235682  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.235775  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.238280  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238712  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.238751  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238935  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.239137  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239310  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239478  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.239662  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.239824  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.239838  302662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:00:40.477460  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:00:40.477495  302662 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:00:40.477529  302662 main.go:141] libmachine: (flannel-266997) Calling .GetURL
	I0214 22:00:40.478939  302662 main.go:141] libmachine: (flannel-266997) DBG | using libvirt version 6000000
	I0214 22:00:40.481396  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481778  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.481807  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481953  302662 main.go:141] libmachine: Docker is up and running!
	I0214 22:00:40.481977  302662 main.go:141] libmachine: Reticulating splines...
	I0214 22:00:40.481987  302662 client.go:171] duration metric: took 23.84148991s to LocalClient.Create
	I0214 22:00:40.482019  302662 start.go:167] duration metric: took 23.841568434s to libmachine.API.Create "flannel-266997"
	I0214 22:00:40.482032  302662 start.go:293] postStartSetup for "flannel-266997" (driver="kvm2")
	I0214 22:00:40.482052  302662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:00:40.482086  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.482376  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:00:40.482407  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.484968  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485363  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.485394  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.485749  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.485890  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.486025  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.573729  302662 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:00:40.577977  302662 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:00:40.578003  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:00:40.578075  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:00:40.578180  302662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:00:40.578302  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:00:40.588072  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:00:40.612075  302662 start.go:296] duration metric: took 130.020062ms for postStartSetup
	I0214 22:00:40.612132  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:40.612708  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.615427  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.615734  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.615764  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.616036  302662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/config.json ...
	I0214 22:00:40.616256  302662 start.go:128] duration metric: took 23.993767271s to createHost
	I0214 22:00:40.616279  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.618824  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619145  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.619172  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619365  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.619515  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619667  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619812  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.619942  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.620120  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.620135  302662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:00:40.739233  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570440.696234424
	
	I0214 22:00:40.739258  302662 fix.go:216] guest clock: 1739570440.696234424
	I0214 22:00:40.739268  302662 fix.go:229] Guest: 2025-02-14 22:00:40.696234424 +0000 UTC Remote: 2025-02-14 22:00:40.616269623 +0000 UTC m=+24.118806419 (delta=79.964801ms)
	I0214 22:00:40.739303  302662 fix.go:200] guest clock delta is within tolerance: 79.964801ms
	I0214 22:00:40.739310  302662 start.go:83] releasing machines lock for "flannel-266997", held for 24.116939765s
	I0214 22:00:40.739341  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.739624  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.742553  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.742948  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.742975  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.743235  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743808  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743985  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.744102  302662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:00:40.744175  302662 ssh_runner.go:195] Run: cat /version.json
	I0214 22:00:40.744198  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.744177  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.747113  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747256  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747420  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747485  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747553  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.747704  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.747663  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747759  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747849  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.747915  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.748050  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.748071  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.748190  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.748337  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.836766  302662 ssh_runner.go:195] Run: systemctl --version
	I0214 22:00:40.864976  302662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:00:41.030697  302662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:00:41.037406  302662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:00:41.037479  302662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:00:41.054755  302662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 22:00:41.054780  302662 start.go:495] detecting cgroup driver to use...
	I0214 22:00:41.054846  302662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:00:41.070471  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:00:41.085648  302662 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:00:41.085703  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:00:41.101988  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:00:41.118492  302662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:00:41.258887  302662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:00:41.416252  302662 docker.go:233] disabling docker service ...
	I0214 22:00:41.416318  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:00:41.433330  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:00:41.447924  302662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W0214 22:00:36.876425  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:36.876444  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:36.876460  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:36.954714  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:36.954740  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:39.500037  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:39.520812  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:39.520889  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:39.562216  296043 cri.go:89] found id: ""
	I0214 22:00:39.562250  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.562263  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:39.562271  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:39.562336  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:39.601201  296043 cri.go:89] found id: ""
	I0214 22:00:39.601234  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.601247  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:39.601255  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:39.601315  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:39.640202  296043 cri.go:89] found id: ""
	I0214 22:00:39.640231  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.640242  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:39.640250  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:39.640307  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:39.674932  296043 cri.go:89] found id: ""
	I0214 22:00:39.674960  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.674972  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:39.674981  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:39.675042  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:39.724788  296043 cri.go:89] found id: ""
	I0214 22:00:39.724820  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.724833  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:39.724841  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:39.724908  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:39.771267  296043 cri.go:89] found id: ""
	I0214 22:00:39.771295  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.771306  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:39.771314  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:39.771369  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:39.810824  296043 cri.go:89] found id: ""
	I0214 22:00:39.810852  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.810864  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:39.810871  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:39.810933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:39.852769  296043 cri.go:89] found id: ""
	I0214 22:00:39.852794  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.852803  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:39.852815  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:39.852831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:39.906779  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:39.906808  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:39.924045  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:39.924072  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:40.027558  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:40.027580  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:40.027594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:40.130386  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:40.130415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:41.665522  302662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:00:41.808101  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:00:41.827287  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:00:41.846475  302662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:00:41.846535  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.858296  302662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:00:41.858365  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.871564  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.892941  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.914718  302662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:00:41.929404  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.943358  302662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.967621  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.981572  302662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:00:41.993282  302662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:00:41.993338  302662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:00:42.007298  302662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:00:42.020823  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:42.168987  302662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 22:00:42.522679  302662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:00:42.522753  302662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:00:42.527926  302662 start.go:563] Will wait 60s for crictl version
	I0214 22:00:42.528000  302662 ssh_runner.go:195] Run: which crictl
	I0214 22:00:42.532262  302662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:00:42.583646  302662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:00:42.583793  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.613308  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.651554  302662 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:00:40.740919  304371 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0214 22:00:40.741156  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:00:40.741214  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:00:40.758664  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0214 22:00:40.759104  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:00:40.759684  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:00:40.759711  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:00:40.760116  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:00:40.760351  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:00:40.760523  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:00:40.760689  304371 start.go:159] libmachine.API.Create for "bridge-266997" (driver="kvm2")
	I0214 22:00:40.760732  304371 client.go:168] LocalClient.Create starting
	I0214 22:00:40.760769  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 22:00:40.760801  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760820  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760889  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 22:00:40.760925  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760947  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760973  304371 main.go:141] libmachine: Running pre-create checks...
	I0214 22:00:40.760985  304371 main.go:141] libmachine: (bridge-266997) Calling .PreCreateCheck
	I0214 22:00:40.761428  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:00:40.761930  304371 main.go:141] libmachine: Creating machine...
	I0214 22:00:40.761945  304371 main.go:141] libmachine: (bridge-266997) Calling .Create
	I0214 22:00:40.762102  304371 main.go:141] libmachine: (bridge-266997) creating KVM machine...
	I0214 22:00:40.762121  304371 main.go:141] libmachine: (bridge-266997) creating network...
	I0214 22:00:40.763213  304371 main.go:141] libmachine: (bridge-266997) DBG | found existing default KVM network
	I0214 22:00:40.764445  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.764318  304393 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:fa:84} reservation:<nil>}
	I0214 22:00:40.765726  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.765653  304393 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266bc0}
	I0214 22:00:40.765754  304371 main.go:141] libmachine: (bridge-266997) DBG | created network xml: 
	I0214 22:00:40.765764  304371 main.go:141] libmachine: (bridge-266997) DBG | <network>
	I0214 22:00:40.765774  304371 main.go:141] libmachine: (bridge-266997) DBG |   <name>mk-bridge-266997</name>
	I0214 22:00:40.765780  304371 main.go:141] libmachine: (bridge-266997) DBG |   <dns enable='no'/>
	I0214 22:00:40.765786  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765794  304371 main.go:141] libmachine: (bridge-266997) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0214 22:00:40.765810  304371 main.go:141] libmachine: (bridge-266997) DBG |     <dhcp>
	I0214 22:00:40.765819  304371 main.go:141] libmachine: (bridge-266997) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0214 22:00:40.765830  304371 main.go:141] libmachine: (bridge-266997) DBG |     </dhcp>
	I0214 22:00:40.765836  304371 main.go:141] libmachine: (bridge-266997) DBG |   </ip>
	I0214 22:00:40.765843  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765848  304371 main.go:141] libmachine: (bridge-266997) DBG | </network>
	I0214 22:00:40.765856  304371 main.go:141] libmachine: (bridge-266997) DBG | 
	I0214 22:00:40.770689  304371 main.go:141] libmachine: (bridge-266997) DBG | trying to create private KVM network mk-bridge-266997 192.168.50.0/24...
	I0214 22:00:40.854522  304371 main.go:141] libmachine: (bridge-266997) DBG | private KVM network mk-bridge-266997 192.168.50.0/24 created
	I0214 22:00:40.854555  304371 main.go:141] libmachine: (bridge-266997) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:40.854570  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.854493  304393 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.854582  304371 main.go:141] libmachine: (bridge-266997) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 22:00:40.854672  304371 main.go:141] libmachine: (bridge-266997) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 22:00:41.215883  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.215729  304393 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa...
	I0214 22:00:41.309617  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309464  304393 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk...
	I0214 22:00:41.309654  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing magic tar header
	I0214 22:00:41.309668  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing SSH key tar header
	I0214 22:00:41.309681  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309616  304393 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:41.309770  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997
	I0214 22:00:41.309791  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 22:00:41.309807  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:41.309822  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 22:00:41.309835  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 (perms=drwx------)
	I0214 22:00:41.309848  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 22:00:41.309858  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 22:00:41.309871  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 22:00:41.309884  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 22:00:41.309910  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 22:00:41.309927  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 22:00:41.309938  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins
	I0214 22:00:41.309949  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home
	I0214 22:00:41.309959  304371 main.go:141] libmachine: (bridge-266997) DBG | skipping /home - not owner
	I0214 22:00:41.309969  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.311296  304371 main.go:141] libmachine: (bridge-266997) define libvirt domain using xml: 
	I0214 22:00:41.311319  304371 main.go:141] libmachine: (bridge-266997) <domain type='kvm'>
	I0214 22:00:41.311329  304371 main.go:141] libmachine: (bridge-266997)   <name>bridge-266997</name>
	I0214 22:00:41.311357  304371 main.go:141] libmachine: (bridge-266997)   <memory unit='MiB'>3072</memory>
	I0214 22:00:41.311407  304371 main.go:141] libmachine: (bridge-266997)   <vcpu>2</vcpu>
	I0214 22:00:41.311453  304371 main.go:141] libmachine: (bridge-266997)   <features>
	I0214 22:00:41.311464  304371 main.go:141] libmachine: (bridge-266997)     <acpi/>
	I0214 22:00:41.311473  304371 main.go:141] libmachine: (bridge-266997)     <apic/>
	I0214 22:00:41.311482  304371 main.go:141] libmachine: (bridge-266997)     <pae/>
	I0214 22:00:41.311492  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311501  304371 main.go:141] libmachine: (bridge-266997)   </features>
	I0214 22:00:41.311522  304371 main.go:141] libmachine: (bridge-266997)   <cpu mode='host-passthrough'>
	I0214 22:00:41.311533  304371 main.go:141] libmachine: (bridge-266997)   
	I0214 22:00:41.311543  304371 main.go:141] libmachine: (bridge-266997)   </cpu>
	I0214 22:00:41.311556  304371 main.go:141] libmachine: (bridge-266997)   <os>
	I0214 22:00:41.311566  304371 main.go:141] libmachine: (bridge-266997)     <type>hvm</type>
	I0214 22:00:41.311575  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='cdrom'/>
	I0214 22:00:41.311585  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='hd'/>
	I0214 22:00:41.311597  304371 main.go:141] libmachine: (bridge-266997)     <bootmenu enable='no'/>
	I0214 22:00:41.311604  304371 main.go:141] libmachine: (bridge-266997)   </os>
	I0214 22:00:41.311615  304371 main.go:141] libmachine: (bridge-266997)   <devices>
	I0214 22:00:41.311623  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='cdrom'>
	I0214 22:00:41.311640  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/boot2docker.iso'/>
	I0214 22:00:41.311651  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hdc' bus='scsi'/>
	I0214 22:00:41.311659  304371 main.go:141] libmachine: (bridge-266997)       <readonly/>
	I0214 22:00:41.311669  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311679  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='disk'>
	I0214 22:00:41.311691  304371 main.go:141] libmachine: (bridge-266997)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 22:00:41.311708  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk'/>
	I0214 22:00:41.311719  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hda' bus='virtio'/>
	I0214 22:00:41.311731  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311745  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311758  304371 main.go:141] libmachine: (bridge-266997)       <source network='mk-bridge-266997'/>
	I0214 22:00:41.311768  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311784  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311795  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311806  304371 main.go:141] libmachine: (bridge-266997)       <source network='default'/>
	I0214 22:00:41.311816  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311835  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311845  304371 main.go:141] libmachine: (bridge-266997)     <serial type='pty'>
	I0214 22:00:41.311854  304371 main.go:141] libmachine: (bridge-266997)       <target port='0'/>
	I0214 22:00:41.311863  304371 main.go:141] libmachine: (bridge-266997)     </serial>
	I0214 22:00:41.311871  304371 main.go:141] libmachine: (bridge-266997)     <console type='pty'>
	I0214 22:00:41.311882  304371 main.go:141] libmachine: (bridge-266997)       <target type='serial' port='0'/>
	I0214 22:00:41.311894  304371 main.go:141] libmachine: (bridge-266997)     </console>
	I0214 22:00:41.311904  304371 main.go:141] libmachine: (bridge-266997)     <rng model='virtio'>
	I0214 22:00:41.311913  304371 main.go:141] libmachine: (bridge-266997)       <backend model='random'>/dev/random</backend>
	I0214 22:00:41.311922  304371 main.go:141] libmachine: (bridge-266997)     </rng>
	I0214 22:00:41.311929  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311935  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311943  304371 main.go:141] libmachine: (bridge-266997)   </devices>
	I0214 22:00:41.311953  304371 main.go:141] libmachine: (bridge-266997) </domain>
	I0214 22:00:41.311963  304371 main.go:141] libmachine: (bridge-266997) 
	I0214 22:00:41.316746  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:64:b9:e2 in network default
	I0214 22:00:41.317498  304371 main.go:141] libmachine: (bridge-266997) starting domain...
	I0214 22:00:41.317522  304371 main.go:141] libmachine: (bridge-266997) ensuring networks are active...
	I0214 22:00:41.317534  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.318252  304371 main.go:141] libmachine: (bridge-266997) Ensuring network default is active
	I0214 22:00:41.318659  304371 main.go:141] libmachine: (bridge-266997) Ensuring network mk-bridge-266997 is active
	I0214 22:00:41.319251  304371 main.go:141] libmachine: (bridge-266997) getting domain XML...
	I0214 22:00:41.320056  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.741479  304371 main.go:141] libmachine: (bridge-266997) waiting for IP...
	I0214 22:00:41.742488  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.743161  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:41.743281  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.743162  304393 retry.go:31] will retry after 281.296096ms: waiting for domain to come up
	I0214 22:00:42.026644  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.027336  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.027373  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.027305  304393 retry.go:31] will retry after 320.245979ms: waiting for domain to come up
	I0214 22:00:42.348610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.349147  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.349189  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.349091  304393 retry.go:31] will retry after 386.466755ms: waiting for domain to come up
	I0214 22:00:42.737580  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.738183  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.738213  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.738129  304393 retry.go:31] will retry after 559.616616ms: waiting for domain to come up
	I0214 22:00:43.299023  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:43.299572  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:43.299604  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:43.299538  304393 retry.go:31] will retry after 737.634158ms: waiting for domain to come up
	I0214 22:00:44.038490  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.039152  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.039187  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.039125  304393 retry.go:31] will retry after 770.231832ms: waiting for domain to come up
	I0214 22:00:44.811167  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.811701  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.811735  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.811676  304393 retry.go:31] will retry after 1.145451756s: waiting for domain to come up
	I0214 22:00:42.652620  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:42.655747  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656123  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:42.656157  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656409  302662 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0214 22:00:42.660943  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:00:42.675829  302662 kubeadm.go:875] updating cluster {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:00:42.675939  302662 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:42.676015  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:42.716871  302662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:00:42.716942  302662 ssh_runner.go:195] Run: which lz4
	I0214 22:00:42.721755  302662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:00:42.726679  302662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:00:42.726706  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:00:44.256067  302662 crio.go:462] duration metric: took 1.53433582s to copy over tarball
	I0214 22:00:44.256172  302662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 22:00:42.679860  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:42.699140  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:42.699212  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:42.744951  296043 cri.go:89] found id: ""
	I0214 22:00:42.744980  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.744992  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:42.745002  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:42.745061  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:42.795928  296043 cri.go:89] found id: ""
	I0214 22:00:42.795960  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.795973  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:42.795981  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:42.796051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:42.850295  296043 cri.go:89] found id: ""
	I0214 22:00:42.850330  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.850344  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:42.850354  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:42.850427  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:42.913832  296043 cri.go:89] found id: ""
	I0214 22:00:42.913862  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.913874  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:42.913884  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:42.913947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:42.983499  296043 cri.go:89] found id: ""
	I0214 22:00:42.983589  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.983607  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:42.983615  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:42.983689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:43.037301  296043 cri.go:89] found id: ""
	I0214 22:00:43.037331  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.037343  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:43.037351  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:43.037419  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:43.084109  296043 cri.go:89] found id: ""
	I0214 22:00:43.084141  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.084153  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:43.084161  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:43.084233  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:43.139429  296043 cri.go:89] found id: ""
	I0214 22:00:43.139460  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.139473  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:43.139486  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:43.139503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:43.203986  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:43.204033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:43.221265  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:43.221297  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:43.326457  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:43.326485  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:43.326510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:43.450012  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:43.450053  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.020884  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:46.036692  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:46.036773  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:46.078455  296043 cri.go:89] found id: ""
	I0214 22:00:46.078496  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.078510  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:46.078521  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:46.078599  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:46.126385  296043 cri.go:89] found id: ""
	I0214 22:00:46.126418  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.126430  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:46.126438  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:46.126505  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:46.174790  296043 cri.go:89] found id: ""
	I0214 22:00:46.174823  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.174836  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:46.174844  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:46.174911  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:46.236219  296043 cri.go:89] found id: ""
	I0214 22:00:46.236264  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.236276  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:46.236284  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:46.236349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:46.279991  296043 cri.go:89] found id: ""
	I0214 22:00:46.280019  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.280031  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:46.280038  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:46.280112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:46.316834  296043 cri.go:89] found id: ""
	I0214 22:00:46.316866  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.316878  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:46.316887  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:46.316951  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:46.355156  296043 cri.go:89] found id: ""
	I0214 22:00:46.355183  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.355192  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:46.355198  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:46.355252  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:46.400157  296043 cri.go:89] found id: ""
	I0214 22:00:46.400184  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.400193  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:46.400204  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:46.400220  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.451755  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:46.451791  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:46.527757  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:46.527804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:46.544748  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:46.544789  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:46.629059  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:46.629085  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:46.629101  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:45.959707  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:45.960207  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:45.960270  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:45.960194  304393 retry.go:31] will retry after 1.00130128s: waiting for domain to come up
	I0214 22:00:46.962593  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:46.963008  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:46.963041  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:46.962955  304393 retry.go:31] will retry after 1.285042496s: waiting for domain to come up
	I0214 22:00:48.250543  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:48.250935  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:48.250965  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:48.250905  304393 retry.go:31] will retry after 1.446388395s: waiting for domain to come up
	I0214 22:00:49.698809  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:49.699471  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:49.699494  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:49.699386  304393 retry.go:31] will retry after 1.758522672s: waiting for domain to come up
	I0214 22:00:46.623241  302662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.367029567s)
	I0214 22:00:46.623279  302662 crio.go:469] duration metric: took 2.367170567s to extract the tarball
	I0214 22:00:46.623290  302662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:00:46.677690  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:46.722617  302662 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:00:46.722657  302662 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:00:46.722670  302662 kubeadm.go:926] updating node { 192.168.61.227 8443 v1.32.1 crio true true} ...
	I0214 22:00:46.722822  302662 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0214 22:00:46.722916  302662 ssh_runner.go:195] Run: crio config
	I0214 22:00:46.772485  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:46.772512  302662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:00:46.772537  302662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-266997 NodeName:flannel-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:00:46.772661  302662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:00:46.772737  302662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:00:46.784220  302662 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:00:46.784289  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:00:46.795155  302662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0214 22:00:46.811382  302662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:00:46.827059  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0214 22:00:46.843173  302662 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0214 22:00:46.846933  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:00:46.859321  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:46.987406  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:00:47.004349  302662 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997 for IP: 192.168.61.227
	I0214 22:00:47.004372  302662 certs.go:194] generating shared ca certs ...
	I0214 22:00:47.004394  302662 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.004581  302662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:00:47.004694  302662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:00:47.004720  302662 certs.go:256] generating profile certs ...
	I0214 22:00:47.004800  302662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key
	I0214 22:00:47.004820  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt with IP's: []
	I0214 22:00:47.107488  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt ...
	I0214 22:00:47.107515  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: {Name:mkcafc2c347155a87934cc2b1a02a2ae438963f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107679  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key ...
	I0214 22:00:47.107689  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key: {Name:mk4272dd225f468d379f0edd78b2d669ffde6d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107784  302662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247
	I0214 22:00:47.107805  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0214 22:00:47.253098  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 ...
	I0214 22:00:47.253126  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247: {Name:mk1eb945c33215ba17bdc46ffcf8840c7f3dd723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253276  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 ...
	I0214 22:00:47.253288  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247: {Name:mkaaf59e6a445fe3bbdd6b7d0c2fa8bb8ab97969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253362  302662 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt
	I0214 22:00:47.253431  302662 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key
	I0214 22:00:47.253483  302662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key
	I0214 22:00:47.253498  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt with IP's: []
	I0214 22:00:47.423779  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt ...
	I0214 22:00:47.423813  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt: {Name:mk6b216b0369b6fec0e56e8e85f07a87b56291e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.423984  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key ...
	I0214 22:00:47.423997  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key: {Name:mk7e5c6c7d7c32823cb9d28b264f6cfeaebe6642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.424190  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:00:47.424232  302662 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:00:47.424244  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:00:47.424269  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:00:47.424295  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:00:47.424323  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:00:47.424371  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:00:47.425017  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:00:47.450688  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:00:47.475301  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:00:47.506864  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:00:47.535303  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:00:47.558848  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:00:47.582259  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:00:47.605880  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 22:00:47.629346  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:00:47.655313  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:00:47.684140  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:00:47.711649  302662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:00:47.728204  302662 ssh_runner.go:195] Run: openssl version
	I0214 22:00:47.734993  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:00:47.745552  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.749952  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.750009  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.755881  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:00:47.766140  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:00:47.776438  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781213  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781254  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.788489  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:00:47.799309  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:00:47.809509  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.813957  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.814001  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.819446  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 22:00:47.829331  302662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:00:47.833329  302662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:00:47.833389  302662 kubeadm.go:392] StartCluster: {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:47.833488  302662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:00:47.833542  302662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:00:47.872065  302662 cri.go:89] found id: ""
	I0214 22:00:47.872175  302662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:00:47.886707  302662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:00:47.897518  302662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:00:47.906407  302662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:00:47.906422  302662 kubeadm.go:157] found existing configuration files:
	
	I0214 22:00:47.906468  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:00:47.917119  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:00:47.917169  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:00:47.927075  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:00:47.936360  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:00:47.936401  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:00:47.946326  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.958232  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:00:47.958271  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.970063  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:00:47.983821  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:00:47.983884  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:00:47.993655  302662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:00:48.149190  302662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:00:49.216868  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:49.235561  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:49.235639  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:49.291785  296043 cri.go:89] found id: ""
	I0214 22:00:49.291817  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.291830  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:49.291840  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:49.291901  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:49.340347  296043 cri.go:89] found id: ""
	I0214 22:00:49.340374  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.340385  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:49.340393  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:49.340446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:49.386999  296043 cri.go:89] found id: ""
	I0214 22:00:49.387030  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.387041  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:49.387048  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:49.387114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:49.433819  296043 cri.go:89] found id: ""
	I0214 22:00:49.433849  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.433861  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:49.433868  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:49.433930  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:49.477406  296043 cri.go:89] found id: ""
	I0214 22:00:49.477453  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.477467  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:49.477478  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:49.477560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:49.522581  296043 cri.go:89] found id: ""
	I0214 22:00:49.522618  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.522648  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:49.522657  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:49.522721  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:49.560370  296043 cri.go:89] found id: ""
	I0214 22:00:49.560399  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.560410  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:49.560418  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:49.560479  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:49.600705  296043 cri.go:89] found id: ""
	I0214 22:00:49.600738  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.600751  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:49.600765  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:49.600787  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:49.692921  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:49.693003  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:49.715093  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:49.715190  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:49.819499  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:49.819529  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:49.819546  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:49.955944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:49.955994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:51.459674  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:51.460265  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:51.460299  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:51.460228  304393 retry.go:31] will retry after 2.818661449s: waiting for domain to come up
	I0214 22:00:54.281066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:54.281541  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:54.281618  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:54.281543  304393 retry.go:31] will retry after 3.13231059s: waiting for domain to come up
	I0214 22:00:52.528580  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:52.545309  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:52.545394  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:52.587415  296043 cri.go:89] found id: ""
	I0214 22:00:52.587446  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.587458  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:52.587466  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:52.587534  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:52.647538  296043 cri.go:89] found id: ""
	I0214 22:00:52.647649  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.647668  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:52.647677  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:52.647749  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:52.700570  296043 cri.go:89] found id: ""
	I0214 22:00:52.700603  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.700615  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:52.700624  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:52.700687  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:52.740732  296043 cri.go:89] found id: ""
	I0214 22:00:52.740764  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.740775  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:52.740782  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:52.740846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:52.781456  296043 cri.go:89] found id: ""
	I0214 22:00:52.781491  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.781503  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:52.781512  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:52.781581  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:52.829342  296043 cri.go:89] found id: ""
	I0214 22:00:52.829380  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.829392  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:52.829400  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:52.829471  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:52.879000  296043 cri.go:89] found id: ""
	I0214 22:00:52.879033  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.879045  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:52.879053  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:52.879127  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:52.923620  296043 cri.go:89] found id: ""
	I0214 22:00:52.923667  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.923680  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:52.923698  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:52.923717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:53.052613  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:53.052665  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:53.105757  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:53.105848  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:53.188362  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:53.188408  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:53.210408  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:53.210462  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:53.308816  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:55.810467  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:55.825649  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:55.825701  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:55.861736  296043 cri.go:89] found id: ""
	I0214 22:00:55.861759  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.861769  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:55.861776  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:55.861826  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:55.903282  296043 cri.go:89] found id: ""
	I0214 22:00:55.903318  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.903330  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:55.903352  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:55.903423  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:55.948890  296043 cri.go:89] found id: ""
	I0214 22:00:55.948919  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.948930  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:55.948937  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:55.948992  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:55.994279  296043 cri.go:89] found id: ""
	I0214 22:00:55.994307  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.994316  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:55.994321  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:55.994376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:56.039497  296043 cri.go:89] found id: ""
	I0214 22:00:56.039539  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.039551  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:56.039563  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:56.039630  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:56.079255  296043 cri.go:89] found id: ""
	I0214 22:00:56.079284  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.079294  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:56.079303  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:56.079367  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:56.121581  296043 cri.go:89] found id: ""
	I0214 22:00:56.121610  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.121622  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:56.121630  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:56.121689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:56.175042  296043 cri.go:89] found id: ""
	I0214 22:00:56.175066  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.175076  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:56.175089  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:56.175103  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:56.229769  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:56.229804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:56.243975  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:56.244001  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:56.319958  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:56.319982  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:56.319996  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:56.406004  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:56.406031  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:58.451548  302662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:00:58.451629  302662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:00:58.451729  302662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:00:58.451841  302662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:00:58.451943  302662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:00:58.452016  302662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:00:58.453381  302662 out.go:235]   - Generating certificates and keys ...
	I0214 22:00:58.453484  302662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:00:58.453567  302662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:00:58.453655  302662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:00:58.453731  302662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:00:58.453819  302662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:00:58.453888  302662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:00:58.453955  302662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:00:58.454117  302662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454193  302662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:00:58.454361  302662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454457  302662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:00:58.454548  302662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:00:58.454610  302662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:00:58.454703  302662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:00:58.454782  302662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:00:58.454863  302662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:00:58.454943  302662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:00:58.455064  302662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:00:58.455162  302662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:00:58.455295  302662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:00:58.455393  302662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:00:58.457252  302662 out.go:235]   - Booting up control plane ...
	I0214 22:00:58.457378  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:00:58.457451  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:00:58.457518  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:00:58.457610  302662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:00:58.457721  302662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:00:58.457788  302662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:00:58.457914  302662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:00:58.458088  302662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:00:58.458149  302662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.319865ms
	I0214 22:00:58.458214  302662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:00:58.458290  302662 kubeadm.go:310] [api-check] The API server is healthy after 5.001402391s
	I0214 22:00:58.458460  302662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:00:58.458610  302662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:00:58.458708  302662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:00:58.458905  302662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:00:58.458986  302662 kubeadm.go:310] [bootstrap-token] Using token: i1fz0a.mthozpfw6j726kwk
	I0214 22:00:58.460106  302662 out.go:235]   - Configuring RBAC rules ...
	I0214 22:00:58.460212  302662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:00:58.460327  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:00:58.460501  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:00:58.460640  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:00:58.460789  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:00:58.460862  302662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:00:58.460961  302662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:00:58.460999  302662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:00:58.461050  302662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:00:58.461063  302662 kubeadm.go:310] 
	I0214 22:00:58.461122  302662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:00:58.461128  302662 kubeadm.go:310] 
	I0214 22:00:58.461201  302662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:00:58.461207  302662 kubeadm.go:310] 
	I0214 22:00:58.461228  302662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:00:58.461309  302662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:00:58.461378  302662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:00:58.461386  302662 kubeadm.go:310] 
	I0214 22:00:58.461462  302662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:00:58.461473  302662 kubeadm.go:310] 
	I0214 22:00:58.461518  302662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:00:58.461525  302662 kubeadm.go:310] 
	I0214 22:00:58.461568  302662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:00:58.461647  302662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:00:58.461725  302662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:00:58.461733  302662 kubeadm.go:310] 
	I0214 22:00:58.461811  302662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:00:58.461891  302662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:00:58.461898  302662 kubeadm.go:310] 
	I0214 22:00:58.462022  302662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462119  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:00:58.462141  302662 kubeadm.go:310] 	--control-plane 
	I0214 22:00:58.462144  302662 kubeadm.go:310] 
	I0214 22:00:58.462225  302662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:00:58.462241  302662 kubeadm.go:310] 
	I0214 22:00:58.462339  302662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462459  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 22:00:58.462474  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:58.463742  302662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0214 22:00:57.415007  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:57.415501  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:57.415568  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:57.415492  304393 retry.go:31] will retry after 5.136891997s: waiting for domain to come up
	I0214 22:00:58.464845  302662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 22:00:58.471373  302662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0214 22:00:58.471395  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0214 22:00:58.493635  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0214 22:00:59.054047  302662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:00:59.054126  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.054208  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-266997 minikube.k8s.io/updated_at=2025_02_14T22_00_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=flannel-266997 minikube.k8s.io/primary=true
	I0214 22:00:59.094360  302662 ops.go:34] apiserver oom_adj: -16
	I0214 22:00:59.226069  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.727014  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.226853  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.726232  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.226169  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:58.959819  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:58.975738  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:58.975799  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:59.016692  296043 cri.go:89] found id: ""
	I0214 22:00:59.016722  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.016734  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:59.016742  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:59.016794  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:59.056462  296043 cri.go:89] found id: ""
	I0214 22:00:59.056486  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.056495  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:59.056504  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:59.056554  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:59.102865  296043 cri.go:89] found id: ""
	I0214 22:00:59.102893  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.102904  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:59.102911  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:59.102977  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:59.139163  296043 cri.go:89] found id: ""
	I0214 22:00:59.139189  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.139199  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:59.139204  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:59.139256  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:59.184113  296043 cri.go:89] found id: ""
	I0214 22:00:59.184142  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.184153  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:59.184160  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:59.184226  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:59.231073  296043 cri.go:89] found id: ""
	I0214 22:00:59.231104  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.231113  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:59.231123  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:59.231304  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:59.284699  296043 cri.go:89] found id: ""
	I0214 22:00:59.284723  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.284733  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:59.284741  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:59.284793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:59.337079  296043 cri.go:89] found id: ""
	I0214 22:00:59.337100  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.337107  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:59.337116  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:59.337133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:59.410337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:59.410365  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:59.410380  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:59.492678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:59.492710  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:59.535993  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:59.536022  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:59.596863  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:59.596889  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:01.726818  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.829407  302662 kubeadm.go:1105] duration metric: took 2.775341982s to wait for elevateKubeSystemPrivileges
	I0214 22:01:01.829439  302662 kubeadm.go:394] duration metric: took 13.996054167s to StartCluster
	I0214 22:01:01.829456  302662 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.829525  302662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:01.831145  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.831377  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:01.831394  302662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:01.831459  302662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:01.831554  302662 addons.go:69] Setting storage-provisioner=true in profile "flannel-266997"
	I0214 22:01:01.831572  302662 addons.go:238] Setting addon storage-provisioner=true in "flannel-266997"
	I0214 22:01:01.831603  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.831596  302662 addons.go:69] Setting default-storageclass=true in profile "flannel-266997"
	I0214 22:01:01.831628  302662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-266997"
	I0214 22:01:01.831660  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:01.832023  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832059  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832025  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832148  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832802  302662 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:01.833905  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:01.852906  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0214 22:01:01.853018  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0214 22:01:01.853380  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853592  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853990  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854005  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854121  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854144  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854347  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854575  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854851  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.854853  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.854886  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.858344  302662 addons.go:238] Setting addon default-storageclass=true in "flannel-266997"
	I0214 22:01:01.858420  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.858836  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.858889  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.870725  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0214 22:01:01.871213  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.871699  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.871721  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.872069  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.872261  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.873845  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.875386  302662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:01.876555  302662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:01.876577  302662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:01.876594  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.879497  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.879905  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.879931  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.880082  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.880247  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0214 22:01:01.880408  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.880539  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.880643  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:01.880960  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.881434  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.881453  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.881864  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.882412  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.882463  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.898239  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0214 22:01:01.898679  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.899246  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.899268  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.899656  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.899837  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.901209  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.901385  302662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:01.901402  302662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:01.901419  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.903666  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.903938  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.904002  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.904165  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.904327  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.904465  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.904593  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:02.010213  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 22:01:02.068737  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:02.254658  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:02.280477  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:02.558819  302662 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
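The sed pipeline run against the coredns ConfigMap a few lines above is what produces this "host record injected" message: it rewrites the Corefile so that host.minikube.internal resolves to the host-side address of the node's libvirt network. Reconstructed from that command (the full Corefile is not dumped in the log), the block inserted ahead of the existing "forward . /etc/resolv.conf" directive is:

	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}

and a "log" directive is added ahead of the existing "errors" line before the ConfigMap is replaced.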
	I0214 22:01:02.560262  302662 node_ready.go:35] waiting up to 15m0s for node "flannel-266997" to be "Ready" ...
	I0214 22:01:03.001707  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001737  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.001737  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001748  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002000  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002015  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002024  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002031  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002103  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002117  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002126  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002133  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002253  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002271  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.004236  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.004250  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.004267  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.012492  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.012514  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.012788  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.012805  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.012820  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.014783  302662 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 22:01:02.553773  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554344  304371 main.go:141] libmachine: (bridge-266997) found domain IP: 192.168.50.81
	I0214 22:01:02.554373  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has current primary IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554391  304371 main.go:141] libmachine: (bridge-266997) reserving static IP address...
	I0214 22:01:02.554641  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find host DHCP lease matching {name: "bridge-266997", mac: "52:54:00:b2:15:b0", ip: "192.168.50.81"} in network mk-bridge-266997
	I0214 22:01:02.642992  304371 main.go:141] libmachine: (bridge-266997) DBG | Getting to WaitForSSH function...
	I0214 22:01:02.643034  304371 main.go:141] libmachine: (bridge-266997) reserved static IP address 192.168.50.81 for domain bridge-266997
	I0214 22:01:02.643044  304371 main.go:141] libmachine: (bridge-266997) waiting for SSH...
	I0214 22:01:02.646143  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646598  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.646647  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646923  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH client type: external
	I0214 22:01:02.646961  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa (-rw-------)
	I0214 22:01:02.647011  304371 main.go:141] libmachine: (bridge-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:01:02.647024  304371 main.go:141] libmachine: (bridge-266997) DBG | About to run SSH command:
	I0214 22:01:02.647035  304371 main.go:141] libmachine: (bridge-266997) DBG | exit 0
	I0214 22:01:02.788308  304371 main.go:141] libmachine: (bridge-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:01:02.788649  304371 main.go:141] libmachine: (bridge-266997) KVM machine creation complete
	I0214 22:01:02.789044  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:02.789606  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789750  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789927  304371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:01:02.789946  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:02.791392  304371 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:01:02.791405  304371 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:01:02.791410  304371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:01:02.791416  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.793977  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794285  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.794302  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794418  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.794553  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794709  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794828  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.794971  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.795189  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.795201  304371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:01:02.909895  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:01:02.909920  304371 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:01:02.909929  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.912696  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913040  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.913066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913200  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.913439  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913647  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913796  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.913932  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.914103  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.914113  304371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:01:03.028655  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:01:03.028744  304371 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:01:03.028760  304371 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:01:03.028776  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029006  304371 buildroot.go:166] provisioning hostname "bridge-266997"
	I0214 22:01:03.029030  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029238  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.032183  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032556  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.032589  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032715  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.032907  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033059  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033225  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.033391  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.033602  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.033619  304371 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-266997 && echo "bridge-266997" | sudo tee /etc/hostname
	I0214 22:01:03.166933  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-266997
	
	I0214 22:01:03.166960  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.169777  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170149  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.170173  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170404  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.170597  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170926  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.171070  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.171304  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.171325  304371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:01:03.303955  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:01:03.303990  304371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:01:03.304021  304371 buildroot.go:174] setting up certificates
	I0214 22:01:03.304040  304371 provision.go:84] configureAuth start
	I0214 22:01:03.304054  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.304376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:03.307438  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.307857  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.307885  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.308035  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.310496  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.310856  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.310903  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.311001  304371 provision.go:143] copyHostCerts
	I0214 22:01:03.311081  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:01:03.311103  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:01:03.311172  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:01:03.311315  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:01:03.311336  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:01:03.311374  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:01:03.311492  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:01:03.311506  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:01:03.311538  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:01:03.311643  304371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.bridge-266997 san=[127.0.0.1 192.168.50.81 bridge-266997 localhost minikube]
	I0214 22:01:03.424494  304371 provision.go:177] copyRemoteCerts
	I0214 22:01:03.424546  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:01:03.424572  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.426781  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427138  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.427178  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427331  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.427484  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.427596  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.427715  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.517135  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 22:01:03.547506  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:01:03.579546  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0214 22:01:03.608150  304371 provision.go:87] duration metric: took 304.098585ms to configureAuth
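The configureAuth step above generates a server key pair signed by the minikube CA with the SANs shown in the provision.go line (127.0.0.1, 192.168.50.81, bridge-266997, localhost, minikube), then scps server-key.pem, ca.pem and server.pem into /etc/docker on the guest. minikube does this in Go; purely as an illustration of what gets produced, an equivalent certificate could be minted by hand with openssl (only the file names and SANs come from the log, the openssl invocation itself is not part of minikube):

	# illustrative sketch only -- approximates the server cert configureAuth creates
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.bridge-266997"
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.81,DNS:bridge-266997,DNS:localhost,DNS:minikube")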
	I0214 22:01:03.608174  304371 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:01:03.608327  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:03.608399  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.610851  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611181  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.611213  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611355  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.611503  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611641  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611754  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.611923  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.612153  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.612174  304371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:01:03.877480  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:01:03.877509  304371 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:01:03.877519  304371 main.go:141] libmachine: (bridge-266997) Calling .GetURL
	I0214 22:01:03.878693  304371 main.go:141] libmachine: (bridge-266997) DBG | using libvirt version 6000000
	I0214 22:01:03.881358  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.881777  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.881808  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.882015  304371 main.go:141] libmachine: Docker is up and running!
	I0214 22:01:03.882031  304371 main.go:141] libmachine: Reticulating splines...
	I0214 22:01:03.882040  304371 client.go:171] duration metric: took 23.121294706s to LocalClient.Create
	I0214 22:01:03.882063  304371 start.go:167] duration metric: took 23.121376335s to libmachine.API.Create "bridge-266997"
	I0214 22:01:03.882075  304371 start.go:293] postStartSetup for "bridge-266997" (driver="kvm2")
	I0214 22:01:03.882086  304371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:01:03.882116  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:03.882342  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:01:03.882376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.884877  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885218  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.885239  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885378  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.885589  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.885735  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.885845  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.976177  304371 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:01:03.980618  304371 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:01:03.980646  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:01:03.980710  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:01:03.980821  304371 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:01:03.980943  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:01:03.991483  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:04.025466  304371 start.go:296] duration metric: took 143.372996ms for postStartSetup
	I0214 22:01:04.025536  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:04.026327  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.029635  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030033  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.030057  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030352  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:01:04.030586  304371 start.go:128] duration metric: took 23.29097433s to createHost
	I0214 22:01:04.030640  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.033610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.033973  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.033998  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.034160  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.034303  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034507  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034685  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.034832  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:04.035026  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:04.035041  304371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:01:04.164811  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570464.136926718
	
	I0214 22:01:04.164832  304371 fix.go:216] guest clock: 1739570464.136926718
	I0214 22:01:04.164842  304371 fix.go:229] Guest: 2025-02-14 22:01:04.136926718 +0000 UTC Remote: 2025-02-14 22:01:04.030601008 +0000 UTC m=+24.065400357 (delta=106.32571ms)
	I0214 22:01:04.164866  304371 fix.go:200] guest clock delta is within tolerance: 106.32571ms
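The tolerance check above is plain arithmetic on the two timestamps: the guest reports 1739570464.136926718 via "date +%s.%N", the host-side reference taken just before the SSH round trip is ...:04.030601008, and 0.136926718 - 0.030601008 = 0.10632571 s, i.e. the 106.32571ms delta, which is inside the allowed drift, so the guest clock is left untouched.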
	I0214 22:01:04.164873  304371 start.go:83] releasing machines lock for "bridge-266997", held for 23.425433669s
	I0214 22:01:04.164896  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.165166  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.170113  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170541  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.170570  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170778  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171367  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171550  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171638  304371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:01:04.171684  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.171762  304371 ssh_runner.go:195] Run: cat /version.json
	I0214 22:01:04.171789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.174819  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175456  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.175481  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175607  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.175712  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.175787  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.175855  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.180293  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180297  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.180332  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.180351  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180558  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.180770  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.180935  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.285108  304371 ssh_runner.go:195] Run: systemctl --version
	I0214 22:01:04.293451  304371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:01:04.463259  304371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:01:04.469147  304371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:01:04.469201  304371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:01:04.484729  304371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
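The find/-exec mv above renames any pre-existing bridge or podman CNI definitions under /etc/cni/net.d to *.mk_disabled (here the stock 87-podman-bridge.conflist) so they cannot conflict with the CNI configuration minikube writes for this profile, the "bridge" CNI manager created further down in the log.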
	I0214 22:01:04.484747  304371 start.go:495] detecting cgroup driver to use...
	I0214 22:01:04.484800  304371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:01:04.502450  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:01:04.515492  304371 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:01:04.515540  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:01:04.528128  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:01:04.540475  304371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:01:04.666826  304371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:01:04.822228  304371 docker.go:233] disabling docker service ...
	I0214 22:01:04.822296  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:01:04.835915  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:01:04.848421  304371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 22:01:04.978701  304371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:01:05.096321  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:01:05.109638  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:01:05.127245  304371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:01:05.127289  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.137128  304371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:01:05.137171  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.149215  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.161652  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.173632  304371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:01:05.184990  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.195432  304371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.211772  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.222080  304371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:01:05.231350  304371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:01:05.231393  304371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:01:05.244531  304371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:01:05.253659  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:05.368821  304371 ssh_runner.go:195] Run: sudo systemctl restart crio
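Taken together, the tee and sed commands above point crictl at the CRI-O socket and switch CRI-O itself to the pause image and cgroup driver minikube expects. Reconstructed from those commands (the resulting files are not dumped in the log), the fields they touch end up roughly as:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf -- only the fields edited above
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

with br_netfilter loaded and IPv4 forwarding enabled (lines above) before crio is restarted and the 60s socket wait begins.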
	I0214 22:01:05.484555  304371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:01:05.484625  304371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:01:05.490439  304371 start.go:563] Will wait 60s for crictl version
	I0214 22:01:05.490512  304371 ssh_runner.go:195] Run: which crictl
	I0214 22:01:05.495575  304371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:01:05.546437  304371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:01:05.546517  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.585123  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.622891  304371 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:01:03.016157  302662 addons.go:514] duration metric: took 1.184704963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:03.064160  302662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-266997" context rescaled to 1 replicas
	W0214 22:01:04.565870  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:02.111615  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:02.130034  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:02.130098  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:02.167633  296043 cri.go:89] found id: ""
	I0214 22:01:02.167669  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.167679  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:02.167687  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:02.167754  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:02.206752  296043 cri.go:89] found id: ""
	I0214 22:01:02.206778  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.206787  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:02.206793  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:02.206848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:02.242991  296043 cri.go:89] found id: ""
	I0214 22:01:02.243021  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.243033  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:02.243045  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:02.243112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:02.284141  296043 cri.go:89] found id: ""
	I0214 22:01:02.284164  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.284172  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:02.284178  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:02.284217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:02.329547  296043 cri.go:89] found id: ""
	I0214 22:01:02.329570  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.329577  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:02.329583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:02.329627  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:02.370731  296043 cri.go:89] found id: ""
	I0214 22:01:02.370758  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.370769  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:02.370778  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:02.370834  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:02.419069  296043 cri.go:89] found id: ""
	I0214 22:01:02.419102  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.419114  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:02.419122  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:02.419199  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:02.464600  296043 cri.go:89] found id: ""
	I0214 22:01:02.464636  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.464655  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:02.464670  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:02.464690  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:02.480854  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:02.480890  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:02.572148  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:02.572175  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:02.572191  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:02.686587  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:02.686646  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:02.734413  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:02.734443  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.297012  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:05.310239  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:05.310303  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:05.344855  296043 cri.go:89] found id: ""
	I0214 22:01:05.344884  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.344895  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:05.344905  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:05.344962  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:05.390466  296043 cri.go:89] found id: ""
	I0214 22:01:05.390498  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.390510  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:05.390518  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:05.390575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:05.442562  296043 cri.go:89] found id: ""
	I0214 22:01:05.442598  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.442611  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:05.442619  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:05.442707  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:05.482534  296043 cri.go:89] found id: ""
	I0214 22:01:05.482562  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.482577  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:05.482583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:05.482659  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:05.526775  296043 cri.go:89] found id: ""
	I0214 22:01:05.526802  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.526813  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:05.526821  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:05.526887  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:05.566945  296043 cri.go:89] found id: ""
	I0214 22:01:05.566971  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.566979  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:05.566991  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:05.567050  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:05.610803  296043 cri.go:89] found id: ""
	I0214 22:01:05.610836  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.610849  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:05.610857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:05.610934  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:05.658446  296043 cri.go:89] found id: ""
	I0214 22:01:05.658475  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.658485  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:05.658497  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:05.658512  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:05.731902  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:05.731929  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:05.731942  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:05.842065  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:05.842098  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:05.903308  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:05.903343  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.975417  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:05.975516  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:05.623928  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:05.627346  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.627929  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:05.627961  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.628196  304371 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0214 22:01:05.633410  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
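The bash one-liner above rewrites /etc/hosts on the guest so that it carries exactly one host.minikube.internal entry; after it runs the file contains a line equivalent to:

	192.168.50.1	host.minikube.internal

which lets processes on the node reach the host side of the libvirt network by name.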
	I0214 22:01:05.650954  304371 kubeadm.go:875] updating cluster {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:01:05.651104  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:01:05.651162  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:05.701425  304371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:01:05.701507  304371 ssh_runner.go:195] Run: which lz4
	I0214 22:01:05.712837  304371 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:01:05.718837  304371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:01:05.718870  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:01:07.256269  304371 crio.go:462] duration metric: took 1.543466683s to copy over tarball
	I0214 22:01:07.256357  304371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 22:01:09.695876  304371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.439479772s)
	I0214 22:01:09.695918  304371 crio.go:469] duration metric: took 2.439614211s to extract the tarball
	I0214 22:01:09.695928  304371 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:01:09.733290  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:09.780117  304371 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:01:09.780140  304371 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:01:09.780160  304371 kubeadm.go:926] updating node { 192.168.50.81 8443 v1.32.1 crio true true} ...
	I0214 22:01:09.780281  304371 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0214 22:01:09.780367  304371 ssh_runner.go:195] Run: crio config
	I0214 22:01:09.827891  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:09.827918  304371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:01:09.827940  304371 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-266997 NodeName:bridge-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:01:09.828092  304371 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:01:09.828156  304371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:01:09.837899  304371 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:01:09.837957  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:01:09.847189  304371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0214 22:01:09.863880  304371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:01:09.881813  304371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0214 22:01:09.898828  304371 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0214 22:01:09.902526  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:01:09.914292  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:10.040048  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:10.057372  304371 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997 for IP: 192.168.50.81
	I0214 22:01:10.057391  304371 certs.go:194] generating shared ca certs ...
	I0214 22:01:10.057407  304371 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.057580  304371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:01:10.057639  304371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:01:10.057653  304371 certs.go:256] generating profile certs ...
	I0214 22:01:10.057737  304371 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key
	I0214 22:01:10.057770  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt with IP's: []
	I0214 22:01:10.492985  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt ...
	I0214 22:01:10.493014  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: {Name:mk0e9a544ab62bf3bac0aeef07e33db8d1284119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493211  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key ...
	I0214 22:01:10.493229  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key: {Name:mk822ad23de6909e3dcaa3a4b87a06fbdfba8176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493342  304371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201
	I0214 22:01:10.493362  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.81]
	I0214 22:01:10.673628  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 ...
	I0214 22:01:10.673651  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201: {Name:mka33ef1d0779dee85a1340cd519c438b531f8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673787  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 ...
	I0214 22:01:10.673801  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201: {Name:mk2bcfa59be0eef44107f0d874f0a177271d56dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673881  304371 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt
	I0214 22:01:10.673969  304371 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key
	I0214 22:01:10.674034  304371 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key
	I0214 22:01:10.674051  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt with IP's: []
	I0214 22:01:10.815875  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt ...
	I0214 22:01:10.815900  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt: {Name:mk07fc7632bf05ef6abf8667a18602d64842bf54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816040  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key ...
	I0214 22:01:10.816054  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key: {Name:mk49f50231c8caf0067f42cee0eef760808a4f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816226  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:01:10.816268  304371 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:01:10.816279  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:01:10.816311  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:01:10.816343  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:01:10.816367  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:01:10.816410  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:10.817057  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:01:10.849496  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:01:10.873071  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:01:10.898240  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:01:10.921216  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:01:10.944392  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:01:10.968476  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:01:10.994710  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 22:01:11.019089  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:01:11.041841  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:01:11.064672  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:01:11.087698  304371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:01:11.105733  304371 ssh_runner.go:195] Run: openssl version
	I0214 22:01:11.113022  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:01:11.124173  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128829  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128877  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.134956  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:01:11.145646  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:01:11.156620  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.160984  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.161023  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.166639  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 22:01:11.177621  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:01:11.189431  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193866  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193907  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.199670  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:01:11.210845  304371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:01:11.214693  304371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:01:11.214742  304371 kubeadm.go:392] StartCluster: {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:01:11.214826  304371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:01:11.214862  304371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:01:11.258711  304371 cri.go:89] found id: ""
	I0214 22:01:11.258765  304371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:01:11.269032  304371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:01:11.279047  304371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:01:11.288803  304371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:01:11.288822  304371 kubeadm.go:157] found existing configuration files:
	
	I0214 22:01:11.288862  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:01:11.298148  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:01:11.298188  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:01:11.307741  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:01:11.316856  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:01:11.316903  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:01:11.326555  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.335896  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:01:11.335935  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.345669  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:01:11.355306  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:01:11.355357  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:01:11.364907  304371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:01:11.427252  304371 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:01:11.427326  304371 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:01:11.531552  304371 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:01:11.531691  304371 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:01:11.531851  304371 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:01:11.543555  304371 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0214 22:01:07.185994  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:08.565172  302662 node_ready.go:49] node "flannel-266997" is "Ready"
	I0214 22:01:08.565220  302662 node_ready.go:38] duration metric: took 6.004932024s for node "flannel-266997" to be "Ready" ...
	I0214 22:01:08.565240  302662 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:08.565299  302662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.602874  302662 api_server.go:72] duration metric: took 6.771445737s to wait for apiserver process to appear ...
	I0214 22:01:08.602902  302662 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:08.602925  302662 api_server.go:253] Checking apiserver healthz at https://192.168.61.227:8443/healthz ...
	I0214 22:01:08.611745  302662 api_server.go:279] https://192.168.61.227:8443/healthz returned 200:
	ok
	I0214 22:01:08.612774  302662 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:08.612800  302662 api_server.go:131] duration metric: took 9.890538ms to wait for apiserver health ...
	I0214 22:01:08.612810  302662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:08.617075  302662 system_pods.go:59] 7 kube-system pods found
	I0214 22:01:08.617117  302662 system_pods.go:61] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.617131  302662 system_pods.go:61] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.617140  302662 system_pods.go:61] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.617151  302662 system_pods.go:61] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.617162  302662 system_pods.go:61] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.617176  302662 system_pods.go:61] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.617187  302662 system_pods.go:61] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.617199  302662 system_pods.go:74] duration metric: took 4.381701ms to wait for pod list to return data ...
	I0214 22:01:08.617213  302662 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:08.620515  302662 default_sa.go:45] found service account: "default"
	I0214 22:01:08.620531  302662 default_sa.go:55] duration metric: took 3.308722ms for default service account to be created ...
	I0214 22:01:08.620537  302662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:08.628163  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.628196  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.628205  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.628217  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.628232  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.628242  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.628250  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.628261  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.628286  302662 retry.go:31] will retry after 229.157349ms: missing components: kube-dns
	I0214 22:01:08.862237  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.862283  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.862293  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.862304  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.862315  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.862322  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.862330  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.862346  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.862370  302662 retry.go:31] will retry after 313.437713ms: missing components: kube-dns
	I0214 22:01:09.180643  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.180698  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.180709  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.180720  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.180732  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.180741  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.180751  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.180762  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.180785  302662 retry.go:31] will retry after 300.968731ms: missing components: kube-dns
	I0214 22:01:09.485817  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.485866  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.485876  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.485888  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.485897  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.485903  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.485914  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.485919  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.485947  302662 retry.go:31] will retry after 439.51358ms: missing components: kube-dns
	I0214 22:01:09.929653  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.929691  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.929699  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.929711  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.929724  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.929734  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.929747  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.929753  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.929778  302662 retry.go:31] will retry after 485.567052ms: missing components: kube-dns
	I0214 22:01:10.418771  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:10.418804  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:10.418813  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:10.418823  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:10.418833  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:10.418840  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:10.418848  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:10.418856  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:10.418873  302662 retry.go:31] will retry after 756.594325ms: missing components: kube-dns
	I0214 22:01:11.179962  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:11.179995  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:11.180004  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:11.180012  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:11.180022  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:11.180032  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:11.180043  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:11.180052  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:11.180085  302662 retry.go:31] will retry after 1.009789241s: missing components: kube-dns
	I0214 22:01:08.494769  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.514374  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:08.514458  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:08.561822  296043 cri.go:89] found id: ""
	I0214 22:01:08.561850  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.561859  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:08.561865  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:08.561912  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:08.602005  296043 cri.go:89] found id: ""
	I0214 22:01:08.602038  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.602051  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:08.602059  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:08.602136  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:08.642584  296043 cri.go:89] found id: ""
	I0214 22:01:08.642612  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.642636  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:08.642647  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:08.642725  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:08.677455  296043 cri.go:89] found id: ""
	I0214 22:01:08.677490  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.677506  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:08.677514  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:08.677579  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:08.723982  296043 cri.go:89] found id: ""
	I0214 22:01:08.724032  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.724046  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:08.724056  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:08.724129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:08.775467  296043 cri.go:89] found id: ""
	I0214 22:01:08.775503  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.775516  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:08.775525  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:08.775587  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:08.820143  296043 cri.go:89] found id: ""
	I0214 22:01:08.820187  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.820209  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:08.820218  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:08.820289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:08.855406  296043 cri.go:89] found id: ""
	I0214 22:01:08.855437  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.855448  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:08.855460  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:08.855476  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:08.914025  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:08.914052  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:08.927679  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:08.927708  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:09.029673  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:09.029699  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:09.029717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:09.113311  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:09.113358  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:11.659812  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:11.673901  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:11.673974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:11.710824  296043 cri.go:89] found id: ""
	I0214 22:01:11.710856  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.710868  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:11.710877  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:11.710939  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:11.749955  296043 cri.go:89] found id: ""
	I0214 22:01:11.749996  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.750009  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:11.750034  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:11.750109  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:11.784268  296043 cri.go:89] found id: ""
	I0214 22:01:11.784296  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.784308  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:11.784317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:11.784381  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:11.565511  304371 out.go:235]   - Generating certificates and keys ...
	I0214 22:01:11.565641  304371 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:01:11.565736  304371 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:01:11.597156  304371 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:01:11.777564  304371 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:01:12.000290  304371 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:01:12.274579  304371 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:01:12.340720  304371 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:01:12.341077  304371 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.592390  304371 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:01:12.592731  304371 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.789172  304371 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:01:12.860794  304371 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:01:12.958408  304371 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:01:12.958673  304371 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:01:13.132122  304371 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:01:13.373236  304371 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:01:13.504795  304371 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:01:13.776085  304371 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:01:14.088313  304371 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:01:14.089020  304371 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:01:14.093447  304371 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:01:14.095224  304371 out.go:235]   - Booting up control plane ...
	I0214 22:01:14.095351  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:01:14.095464  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:01:14.095532  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:01:14.111383  304371 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:01:14.118029  304371 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:01:14.118117  304371 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:01:14.266373  304371 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:01:14.266491  304371 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:01:14.767156  304371 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.155046ms
	I0214 22:01:14.767269  304371 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:01:12.399215  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:12.399250  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:12.399257  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:12.399265  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:12.399271  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:12.399279  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:12.399285  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:12.399296  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:12.399322  302662 retry.go:31] will retry after 1.435229105s: missing components: kube-dns
	I0214 22:01:13.838510  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:13.838553  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:13.838563  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:13.838572  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:13.838579  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:13.838584  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:13.838590  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:13.838599  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:13.838619  302662 retry.go:31] will retry after 1.229976943s: missing components: kube-dns
	I0214 22:01:15.072944  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:15.072987  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:15.072997  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:15.073007  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:15.073017  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:15.073024  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:15.073034  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:15.073042  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:15.073077  302662 retry.go:31] will retry after 1.417685153s: missing components: kube-dns
	I0214 22:01:16.494415  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:16.494450  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:16.494456  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:16.494463  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:16.494467  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:16.494471  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:16.494475  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:16.494478  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:16.494495  302662 retry.go:31] will retry after 2.360792167s: missing components: kube-dns
	I0214 22:01:11.822362  296043 cri.go:89] found id: ""
	I0214 22:01:11.822387  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.822395  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:11.822401  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:11.822462  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:11.860753  296043 cri.go:89] found id: ""
	I0214 22:01:11.860778  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.860786  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:11.860791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:11.860833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:11.901670  296043 cri.go:89] found id: ""
	I0214 22:01:11.901697  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.901709  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:11.901717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:11.901779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:11.939194  296043 cri.go:89] found id: ""
	I0214 22:01:11.939220  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.939230  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:11.939236  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:11.939289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:11.973819  296043 cri.go:89] found id: ""
	I0214 22:01:11.973846  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.973857  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:11.973869  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:11.973882  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:12.052290  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:12.052321  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:12.099732  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:12.099775  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:12.163962  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:12.163994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:12.181579  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:12.181625  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:12.272639  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:14.774322  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:14.787244  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:14.787299  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:14.820977  296043 cri.go:89] found id: ""
	I0214 22:01:14.821011  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.821024  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:14.821034  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:14.821099  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:14.852858  296043 cri.go:89] found id: ""
	I0214 22:01:14.852879  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.852888  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:14.852893  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:14.852947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:14.896441  296043 cri.go:89] found id: ""
	I0214 22:01:14.896464  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.896475  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:14.896483  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:14.896535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:14.930673  296043 cri.go:89] found id: ""
	I0214 22:01:14.930700  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.930712  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:14.930719  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:14.930776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:14.972676  296043 cri.go:89] found id: ""
	I0214 22:01:14.972708  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.972721  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:14.972729  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:14.972797  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:15.009271  296043 cri.go:89] found id: ""
	I0214 22:01:15.009303  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.009314  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:15.009323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:15.009406  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:15.045975  296043 cri.go:89] found id: ""
	I0214 22:01:15.046007  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.046021  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:15.046029  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:15.046102  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:15.084924  296043 cri.go:89] found id: ""
	I0214 22:01:15.084956  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.084967  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:15.084980  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:15.084995  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:15.143553  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:15.143587  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:15.158649  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:15.158687  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:15.235319  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:15.235343  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:15.235363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:15.324951  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:15.324990  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:19.266915  304371 kubeadm.go:310] [api-check] The API server is healthy after 4.501226967s
	I0214 22:01:19.286682  304371 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:01:19.300140  304371 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:01:19.320686  304371 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:01:19.320946  304371 kubeadm.go:310] [mark-control-plane] Marking the node bridge-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:01:19.338179  304371 kubeadm.go:310] [bootstrap-token] Using token: 4eaob3.8jnji5hz23dblskn
	I0214 22:01:19.339524  304371 out.go:235]   - Configuring RBAC rules ...
	I0214 22:01:19.339671  304371 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:01:19.345535  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:01:19.356239  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:01:19.363770  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:01:19.366981  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:01:19.371513  304371 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:01:19.672166  304371 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:01:20.099981  304371 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:01:20.669741  304371 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:01:20.671058  304371 kubeadm.go:310] 
	I0214 22:01:20.671186  304371 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:01:20.671210  304371 kubeadm.go:310] 
	I0214 22:01:20.671373  304371 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:01:20.671393  304371 kubeadm.go:310] 
	I0214 22:01:20.671428  304371 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:01:20.671511  304371 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:01:20.671588  304371 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:01:20.671598  304371 kubeadm.go:310] 
	I0214 22:01:20.671681  304371 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:01:20.671694  304371 kubeadm.go:310] 
	I0214 22:01:20.671769  304371 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:01:20.671784  304371 kubeadm.go:310] 
	I0214 22:01:20.671862  304371 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:01:20.671971  304371 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:01:20.672051  304371 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:01:20.672059  304371 kubeadm.go:310] 
	I0214 22:01:20.672173  304371 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:01:20.672270  304371 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:01:20.672278  304371 kubeadm.go:310] 
	I0214 22:01:20.672403  304371 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.672552  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:01:20.672586  304371 kubeadm.go:310] 	--control-plane 
	I0214 22:01:20.672596  304371 kubeadm.go:310] 
	I0214 22:01:20.672722  304371 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:01:20.672757  304371 kubeadm.go:310] 
	I0214 22:01:20.672884  304371 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.673034  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 22:01:20.673551  304371 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:01:20.673583  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:20.674803  304371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 22:01:18.859941  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:18.859975  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:18.859981  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:18.859987  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:18.859991  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:18.859996  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:18.860000  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:18.860004  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:18.860019  302662 retry.go:31] will retry after 2.716114002s: missing components: kube-dns
	I0214 22:01:17.869522  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:17.886022  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:17.886114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:17.926259  296043 cri.go:89] found id: ""
	I0214 22:01:17.926287  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.926296  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:17.926302  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:17.926358  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:17.989648  296043 cri.go:89] found id: ""
	I0214 22:01:17.989675  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.989683  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:17.989689  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:17.989744  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:18.041262  296043 cri.go:89] found id: ""
	I0214 22:01:18.041295  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.041307  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:18.041315  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:18.041380  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:18.080028  296043 cri.go:89] found id: ""
	I0214 22:01:18.080059  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.080069  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:18.080075  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:18.080134  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:18.116135  296043 cri.go:89] found id: ""
	I0214 22:01:18.116163  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.116172  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:18.116179  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:18.116239  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:18.148268  296043 cri.go:89] found id: ""
	I0214 22:01:18.148302  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.148315  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:18.148323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:18.148399  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:18.180352  296043 cri.go:89] found id: ""
	I0214 22:01:18.180378  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.180388  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:18.180394  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:18.180438  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:18.211513  296043 cri.go:89] found id: ""
	I0214 22:01:18.211534  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.211541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:18.211551  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:18.211562  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:18.260797  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:18.260831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:18.273477  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:18.273503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:18.340163  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:18.340182  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:18.340193  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:18.413927  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:18.413950  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:20.952238  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:20.964925  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:20.964984  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:21.000265  296043 cri.go:89] found id: ""
	I0214 22:01:21.000295  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.000306  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:21.000314  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:21.000376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:21.042754  296043 cri.go:89] found id: ""
	I0214 22:01:21.042780  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.042790  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:21.042798  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:21.042862  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:21.078636  296043 cri.go:89] found id: ""
	I0214 22:01:21.078664  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.078676  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:21.078684  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:21.078747  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:21.112023  296043 cri.go:89] found id: ""
	I0214 22:01:21.112050  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.112058  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:21.112067  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:21.112129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:21.147419  296043 cri.go:89] found id: ""
	I0214 22:01:21.147451  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.147462  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:21.147470  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:21.147541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:21.180151  296043 cri.go:89] found id: ""
	I0214 22:01:21.180191  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.180201  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:21.180209  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:21.180271  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:21.215007  296043 cri.go:89] found id: ""
	I0214 22:01:21.215037  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.215049  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:21.215057  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:21.215122  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:21.247912  296043 cri.go:89] found id: ""
	I0214 22:01:21.247953  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.247964  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:21.247976  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:21.247992  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:21.300392  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:21.300429  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:21.313583  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:21.313604  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:21.381863  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:21.381888  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:21.381902  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:21.460562  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:21.460591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:21.580732  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:21.580767  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Running
	I0214 22:01:21.580773  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:21.580777  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:21.580781  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:21.580785  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:21.580789  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:21.580792  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:21.580800  302662 system_pods.go:126] duration metric: took 12.960258845s to wait for k8s-apps to be running ...
	I0214 22:01:21.580808  302662 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:21.580852  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:21.596764  302662 system_svc.go:56] duration metric: took 15.934258ms WaitForService to wait for kubelet
	I0214 22:01:21.596793  302662 kubeadm.go:578] duration metric: took 19.765370857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:21.596814  302662 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:21.601648  302662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:21.601680  302662 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:21.601700  302662 node_conditions.go:105] duration metric: took 4.879566ms to run NodePressure ...
	I0214 22:01:21.601715  302662 start.go:241] waiting for startup goroutines ...
	I0214 22:01:21.601731  302662 start.go:246] waiting for cluster config update ...
	I0214 22:01:21.601749  302662 start.go:255] writing updated cluster config ...
	I0214 22:01:21.602045  302662 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:21.607012  302662 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:21.610715  302662 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.619683  302662 pod_ready.go:94] pod "coredns-668d6bf9bc-vlb9g" is "Ready"
	I0214 22:01:21.619715  302662 pod_ready.go:86] duration metric: took 8.975726ms for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.621747  302662 pod_ready.go:83] waiting for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.625095  302662 pod_ready.go:94] pod "etcd-flannel-266997" is "Ready"
	I0214 22:01:21.625112  302662 pod_ready.go:86] duration metric: took 3.349739ms for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.626839  302662 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.630189  302662 pod_ready.go:94] pod "kube-apiserver-flannel-266997" is "Ready"
	I0214 22:01:21.630205  302662 pod_ready.go:86] duration metric: took 3.350537ms for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.631966  302662 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.010234  302662 pod_ready.go:94] pod "kube-controller-manager-flannel-266997" is "Ready"
	I0214 22:01:22.010258  302662 pod_ready.go:86] duration metric: took 378.271702ms for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.210925  302662 pod_ready.go:83] waiting for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.610516  302662 pod_ready.go:94] pod "kube-proxy-lnlt5" is "Ready"
	I0214 22:01:22.610544  302662 pod_ready.go:86] duration metric: took 399.590168ms for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.810190  302662 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210781  302662 pod_ready.go:94] pod "kube-scheduler-flannel-266997" is "Ready"
	I0214 22:01:23.210809  302662 pod_ready.go:86] duration metric: took 400.595935ms for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210825  302662 pod_ready.go:40] duration metric: took 1.603788898s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:23.254724  302662 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:23.256280  302662 out.go:177] * Done! kubectl is now configured to use "flannel-266997" cluster and "default" namespace by default
	I0214 22:01:20.675853  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 22:01:20.687674  304371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0214 22:01:20.710977  304371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:01:20.711051  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:20.711136  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-266997 minikube.k8s.io/updated_at=2025_02_14T22_01_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=bridge-266997 minikube.k8s.io/primary=true
	I0214 22:01:20.857437  304371 ops.go:34] apiserver oom_adj: -16
	I0214 22:01:20.857573  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.357978  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.858196  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.357909  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.858323  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.358263  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.858483  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.358410  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.857672  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.358214  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.477742  304371 kubeadm.go:1105] duration metric: took 4.766743198s to wait for elevateKubeSystemPrivileges
	I0214 22:01:25.477787  304371 kubeadm.go:394] duration metric: took 14.263049181s to StartCluster
	I0214 22:01:25.477813  304371 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.477894  304371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:25.479312  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.479566  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:25.479594  304371 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:25.479566  304371 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:25.479695  304371 addons.go:69] Setting default-storageclass=true in profile "bridge-266997"
	I0214 22:01:25.479721  304371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-266997"
	I0214 22:01:25.479683  304371 addons.go:69] Setting storage-provisioner=true in profile "bridge-266997"
	I0214 22:01:25.479825  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:25.479828  304371 addons.go:238] Setting addon storage-provisioner=true in "bridge-266997"
	I0214 22:01:25.479933  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.480344  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480370  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480383  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.480400  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.481183  304371 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:25.482440  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:25.495953  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42079
	I0214 22:01:25.495973  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0214 22:01:25.496360  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496536  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496851  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.496873  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497082  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.497104  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497237  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.497486  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.497490  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.498041  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.498075  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.500794  304371 addons.go:238] Setting addon default-storageclass=true in "bridge-266997"
	I0214 22:01:25.500829  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.501072  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.501096  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.512606  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0214 22:01:25.512964  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.513385  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.513407  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.513770  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.513947  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.515505  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.517101  304371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:25.518333  304371 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.518354  304371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:25.518373  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.520011  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0214 22:01:25.520422  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.520847  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.520869  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.521183  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.521437  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.521710  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.521753  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.521881  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.521906  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.522179  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.522387  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.522543  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.522708  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.535515  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0214 22:01:25.535896  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.536315  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.536343  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.536695  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.536861  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.538765  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.538948  304371 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:25.538962  304371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:25.538976  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.541815  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542297  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.542316  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542488  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.542694  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.542878  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.543023  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.709288  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:25.709340  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 22:01:25.818938  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.883618  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:26.231097  304371 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0214 22:01:26.232118  304371 node_ready.go:35] waiting up to 15m0s for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244261  304371 node_ready.go:49] node "bridge-266997" is "Ready"
	I0214 22:01:26.244293  304371 node_ready.go:38] duration metric: took 12.148864ms for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244325  304371 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:26.244387  304371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:26.454003  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454033  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454062  304371 api_server.go:72] duration metric: took 974.324958ms to wait for apiserver process to appear ...
	I0214 22:01:26.454104  304371 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:26.454137  304371 api_server.go:253] Checking apiserver healthz at https://192.168.50.81:8443/healthz ...
	I0214 22:01:26.454282  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454299  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454449  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454476  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454486  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454495  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454560  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454577  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454580  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.454586  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454600  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454869  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454887  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454929  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.457012  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.457107  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.457041  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.464354  304371 api_server.go:279] https://192.168.50.81:8443/healthz returned 200:
	ok
	I0214 22:01:26.465264  304371 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:26.465285  304371 api_server.go:131] duration metric: took 11.170116ms to wait for apiserver health ...
	I0214 22:01:26.465296  304371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:26.471233  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.471249  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.471450  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.471473  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.471853  304371 system_pods.go:59] 8 kube-system pods found
	I0214 22:01:26.471889  304371 system_pods.go:61] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471903  304371 system_pods.go:61] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471917  304371 system_pods.go:61] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.471930  304371 system_pods.go:61] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.471941  304371 system_pods.go:61] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.471957  304371 system_pods.go:61] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.471966  304371 system_pods.go:61] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.471979  304371 system_pods.go:61] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending
	I0214 22:01:26.471988  304371 system_pods.go:74] duration metric: took 6.684999ms to wait for pod list to return data ...
	I0214 22:01:26.472001  304371 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:26.472806  304371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 22:01:24.002770  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:24.015631  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:24.015700  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:24.051601  296043 cri.go:89] found id: ""
	I0214 22:01:24.051637  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.051649  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:24.051657  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:24.051710  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:24.084938  296043 cri.go:89] found id: ""
	I0214 22:01:24.084963  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.084971  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:24.084977  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:24.085019  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:24.118982  296043 cri.go:89] found id: ""
	I0214 22:01:24.119012  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.119023  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:24.119030  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:24.119091  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:24.150809  296043 cri.go:89] found id: ""
	I0214 22:01:24.150838  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.150849  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:24.150857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:24.150927  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:24.180499  296043 cri.go:89] found id: ""
	I0214 22:01:24.180527  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.180538  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:24.180546  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:24.180613  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:24.214503  296043 cri.go:89] found id: ""
	I0214 22:01:24.214531  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.214542  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:24.214550  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:24.214616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:24.250992  296043 cri.go:89] found id: ""
	I0214 22:01:24.251018  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.251026  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:24.251032  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:24.251090  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:24.287791  296043 cri.go:89] found id: ""
	I0214 22:01:24.287816  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.287824  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:24.287839  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:24.287854  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:24.324499  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:24.324533  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:24.373673  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:24.373700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:24.387527  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:24.387558  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:24.464362  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:24.464394  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:24.464409  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:26.474033  304371 addons.go:514] duration metric: took 994.441902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:26.476260  304371 default_sa.go:45] found service account: "default"
	I0214 22:01:26.476283  304371 default_sa.go:55] duration metric: took 4.273083ms for default service account to be created ...
	I0214 22:01:26.476293  304371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:26.480354  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.480386  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480397  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480410  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.480419  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.480429  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.480435  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.480445  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.480457  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.480479  304371 retry.go:31] will retry after 268.412371ms: missing components: kube-dns
	I0214 22:01:26.734480  304371 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-266997" context rescaled to 1 replicas
	I0214 22:01:26.752596  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.752625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752632  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752639  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.752645  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.752649  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.752654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.752663  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.752668  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.752683  304371 retry.go:31] will retry after 253.744271ms: missing components: kube-dns
	I0214 22:01:27.010128  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.010160  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010169  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010176  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.010182  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.010187  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.010190  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.010195  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.010200  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:27.010215  304371 retry.go:31] will retry after 373.755847ms: missing components: kube-dns
	I0214 22:01:27.387928  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.387976  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.387988  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.388001  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.388015  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.388022  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.388031  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.388040  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.388048  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.388073  304371 retry.go:31] will retry after 449.518817ms: missing components: kube-dns
	I0214 22:01:27.841591  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.841625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841633  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841640  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.841646  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.841650  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.841654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.841661  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.841664  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.841680  304371 retry.go:31] will retry after 522.702646ms: missing components: kube-dns
	I0214 22:01:28.368689  304371 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:28.368725  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:28.368733  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:28.368741  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:28.368746  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:28.368753  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:28.368761  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:28.368765  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:28.368774  304371 system_pods.go:126] duration metric: took 1.892474517s to wait for k8s-apps to be running ...
	I0214 22:01:28.368785  304371 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:28.368830  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:28.383657  304371 system_svc.go:56] duration metric: took 14.862939ms WaitForService to wait for kubelet
	I0214 22:01:28.383685  304371 kubeadm.go:578] duration metric: took 2.903970849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:28.383703  304371 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:28.387139  304371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:28.387163  304371 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:28.387176  304371 node_conditions.go:105] duration metric: took 3.468187ms to run NodePressure ...
	I0214 22:01:28.387187  304371 start.go:241] waiting for startup goroutines ...
	I0214 22:01:28.387200  304371 start.go:246] waiting for cluster config update ...
	I0214 22:01:28.387215  304371 start.go:255] writing updated cluster config ...
	I0214 22:01:28.387551  304371 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:28.391627  304371 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:28.395108  304371 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:27.040249  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:27.052990  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:27.053055  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:27.092109  296043 cri.go:89] found id: ""
	I0214 22:01:27.092138  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.092150  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:27.092158  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:27.092219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:27.128290  296043 cri.go:89] found id: ""
	I0214 22:01:27.128323  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.128336  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:27.128344  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:27.128413  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:27.166086  296043 cri.go:89] found id: ""
	I0214 22:01:27.166113  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.166121  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:27.166127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:27.166174  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:27.198082  296043 cri.go:89] found id: ""
	I0214 22:01:27.198114  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.198126  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:27.198133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:27.198196  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:27.229133  296043 cri.go:89] found id: ""
	I0214 22:01:27.229167  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.229182  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:27.229190  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:27.229253  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:27.267454  296043 cri.go:89] found id: ""
	I0214 22:01:27.267483  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.267495  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:27.267504  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:27.267570  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:27.306235  296043 cri.go:89] found id: ""
	I0214 22:01:27.306265  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.306277  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:27.306289  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:27.306368  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:27.337862  296043 cri.go:89] found id: ""
	I0214 22:01:27.337894  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.337905  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:27.337916  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:27.337928  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:27.384978  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:27.385007  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:27.398968  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:27.398999  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:27.468335  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:27.468363  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:27.468379  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:27.549329  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:27.549363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:30.097135  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:30.110653  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:30.110740  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:30.148484  296043 cri.go:89] found id: ""
	I0214 22:01:30.148518  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.148530  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:30.148538  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:30.148611  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:30.183761  296043 cri.go:89] found id: ""
	I0214 22:01:30.183791  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.183802  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:30.183809  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:30.183866  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:30.216232  296043 cri.go:89] found id: ""
	I0214 22:01:30.216260  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.216271  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:30.216278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:30.216346  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:30.248173  296043 cri.go:89] found id: ""
	I0214 22:01:30.248199  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.248210  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:30.248217  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:30.248281  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:30.283288  296043 cri.go:89] found id: ""
	I0214 22:01:30.283318  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.283329  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:30.283350  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:30.283402  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:30.324270  296043 cri.go:89] found id: ""
	I0214 22:01:30.324297  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.324308  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:30.324317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:30.324373  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:30.360122  296043 cri.go:89] found id: ""
	I0214 22:01:30.360146  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.360154  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:30.360159  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:30.360207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:30.394546  296043 cri.go:89] found id: ""
	I0214 22:01:30.394571  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.394580  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:30.394594  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:30.394613  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:30.449231  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:30.449258  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:30.463475  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:30.463499  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:30.536719  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:30.536746  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:30.536762  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:30.619446  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:30.619484  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:01:30.438589  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:32.924767  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:33.159018  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:33.176759  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:33.176842  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:33.216502  296043 cri.go:89] found id: ""
	I0214 22:01:33.216527  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.216536  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:33.216542  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:33.216597  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:33.254772  296043 cri.go:89] found id: ""
	I0214 22:01:33.254799  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.254810  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:33.254817  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:33.254878  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:33.287687  296043 cri.go:89] found id: ""
	I0214 22:01:33.287713  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.287722  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:33.287728  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:33.287790  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:33.319969  296043 cri.go:89] found id: ""
	I0214 22:01:33.319990  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.319997  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:33.320002  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:33.320046  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:33.352720  296043 cri.go:89] found id: ""
	I0214 22:01:33.352740  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.352747  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:33.352752  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:33.352807  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:33.390638  296043 cri.go:89] found id: ""
	I0214 22:01:33.390662  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.390671  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:33.390678  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:33.390730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:33.425935  296043 cri.go:89] found id: ""
	I0214 22:01:33.425954  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.425962  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:33.425967  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:33.426012  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:33.459671  296043 cri.go:89] found id: ""
	I0214 22:01:33.459695  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.459705  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:33.459716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:33.459730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:33.535469  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:33.535493  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:33.570473  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:33.570501  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:33.619720  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:33.619745  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:33.631829  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:33.631850  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:33.701637  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.202577  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:36.216700  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:36.216761  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:36.250764  296043 cri.go:89] found id: ""
	I0214 22:01:36.250789  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.250798  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:36.250804  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:36.250853  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:36.284811  296043 cri.go:89] found id: ""
	I0214 22:01:36.284838  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.284850  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:36.284857  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:36.284916  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:36.321197  296043 cri.go:89] found id: ""
	I0214 22:01:36.321219  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.321227  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:36.321235  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:36.321277  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:36.354869  296043 cri.go:89] found id: ""
	I0214 22:01:36.354896  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.354907  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:36.354915  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:36.354967  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:36.393688  296043 cri.go:89] found id: ""
	I0214 22:01:36.393712  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.393722  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:36.393730  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:36.393781  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:36.427985  296043 cri.go:89] found id: ""
	I0214 22:01:36.428006  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.428015  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:36.428023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:36.428076  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:36.458367  296043 cri.go:89] found id: ""
	I0214 22:01:36.458386  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.458393  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:36.458398  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:36.458446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:36.489038  296043 cri.go:89] found id: ""
	I0214 22:01:36.489061  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.489069  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:36.489080  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:36.489093  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:36.526950  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:36.526971  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:36.577258  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:36.577293  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:36.589545  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:36.589567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:36.658634  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.658656  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:36.658674  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0214 22:01:35.400875  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:37.900278  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:38.401005  304371 pod_ready.go:94] pod "coredns-668d6bf9bc-m2ggw" is "Ready"
	I0214 22:01:38.401031  304371 pod_ready.go:86] duration metric: took 10.005896118s for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.403160  304371 pod_ready.go:83] waiting for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.407295  304371 pod_ready.go:94] pod "etcd-bridge-266997" is "Ready"
	I0214 22:01:38.407320  304371 pod_ready.go:86] duration metric: took 4.131989ms for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.409214  304371 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.413019  304371 pod_ready.go:94] pod "kube-apiserver-bridge-266997" is "Ready"
	I0214 22:01:38.413047  304371 pod_ready.go:86] duration metric: took 3.813497ms for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.414707  304371 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.598300  304371 pod_ready.go:94] pod "kube-controller-manager-bridge-266997" is "Ready"
	I0214 22:01:38.598321  304371 pod_ready.go:86] duration metric: took 183.594312ms for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.799339  304371 pod_ready.go:83] waiting for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.198982  304371 pod_ready.go:94] pod "kube-proxy-xdwmc" is "Ready"
	I0214 22:01:39.199006  304371 pod_ready.go:86] duration metric: took 399.648451ms for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.400069  304371 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800157  304371 pod_ready.go:94] pod "kube-scheduler-bridge-266997" is "Ready"
	I0214 22:01:39.800184  304371 pod_ready.go:86] duration metric: took 400.072932ms for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800195  304371 pod_ready.go:40] duration metric: took 11.408545307s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:39.844662  304371 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:39.846593  304371 out.go:177] * Done! kubectl is now configured to use "bridge-266997" cluster and "default" namespace by default
	I0214 22:01:39.231339  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:39.244717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:39.244765  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:39.277734  296043 cri.go:89] found id: ""
	I0214 22:01:39.277756  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.277766  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:39.277773  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:39.277836  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:39.309896  296043 cri.go:89] found id: ""
	I0214 22:01:39.309916  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.309923  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:39.309931  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:39.309979  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:39.342579  296043 cri.go:89] found id: ""
	I0214 22:01:39.342608  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.342619  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:39.342637  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:39.342686  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:39.378083  296043 cri.go:89] found id: ""
	I0214 22:01:39.378112  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.378124  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:39.378134  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:39.378192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:39.414803  296043 cri.go:89] found id: ""
	I0214 22:01:39.414828  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.414842  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:39.414850  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:39.414904  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:39.449659  296043 cri.go:89] found id: ""
	I0214 22:01:39.449690  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.449702  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:39.449711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:39.449778  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:39.486261  296043 cri.go:89] found id: ""
	I0214 22:01:39.486288  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.486300  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:39.486308  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:39.486371  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:39.518224  296043 cri.go:89] found id: ""
	I0214 22:01:39.518245  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.518253  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:39.518264  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:39.518277  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:39.598112  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:39.598145  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:39.634704  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:39.634727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:39.685193  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:39.685217  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:39.697332  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:39.697355  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:39.773514  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.273720  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:42.290415  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:42.290491  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:42.329509  296043 cri.go:89] found id: ""
	I0214 22:01:42.329539  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.329549  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:42.329556  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:42.329616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:42.366218  296043 cri.go:89] found id: ""
	I0214 22:01:42.366247  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.366259  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:42.366267  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:42.366324  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:42.404603  296043 cri.go:89] found id: ""
	I0214 22:01:42.404627  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.404634  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:42.404641  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:42.404691  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:42.437980  296043 cri.go:89] found id: ""
	I0214 22:01:42.438008  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.438017  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:42.438023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:42.438072  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:42.470475  296043 cri.go:89] found id: ""
	I0214 22:01:42.470505  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.470517  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:42.470526  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:42.470592  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:42.503557  296043 cri.go:89] found id: ""
	I0214 22:01:42.503593  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.503606  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:42.503614  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:42.503681  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:42.537499  296043 cri.go:89] found id: ""
	I0214 22:01:42.537549  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.537559  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:42.537568  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:42.537629  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:42.581710  296043 cri.go:89] found id: ""
	I0214 22:01:42.581740  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.581752  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:42.581765  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:42.581785  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:42.594891  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:42.594920  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:42.675186  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.675207  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:42.675221  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:42.762000  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:42.762033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:42.813591  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:42.813644  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.368276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:45.383477  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:45.383541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:45.419199  296043 cri.go:89] found id: ""
	I0214 22:01:45.419226  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.419235  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:45.419242  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:45.419286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:45.457708  296043 cri.go:89] found id: ""
	I0214 22:01:45.457740  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.457752  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:45.457761  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:45.457831  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:45.497110  296043 cri.go:89] found id: ""
	I0214 22:01:45.497138  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.497146  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:45.497154  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:45.497220  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:45.534294  296043 cri.go:89] found id: ""
	I0214 22:01:45.534318  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.534326  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:45.534333  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:45.534392  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:45.575462  296043 cri.go:89] found id: ""
	I0214 22:01:45.575492  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.575504  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:45.575513  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:45.575573  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:45.615590  296043 cri.go:89] found id: ""
	I0214 22:01:45.615620  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.615631  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:45.615639  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:45.615694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:45.655779  296043 cri.go:89] found id: ""
	I0214 22:01:45.655813  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.655826  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:45.655834  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:45.655903  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:45.691350  296043 cri.go:89] found id: ""
	I0214 22:01:45.691376  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.691386  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:45.691395  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:45.691407  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.749784  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:45.749833  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:45.764193  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:45.764225  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:45.836887  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:45.836914  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:45.836930  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:45.943944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:45.943974  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.486718  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:48.500667  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:48.500730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:48.539749  296043 cri.go:89] found id: ""
	I0214 22:01:48.539775  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.539785  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:48.539794  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:48.539846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:48.576675  296043 cri.go:89] found id: ""
	I0214 22:01:48.576703  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.576714  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:48.576723  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:48.576776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:48.608593  296043 cri.go:89] found id: ""
	I0214 22:01:48.608618  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.608627  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:48.608634  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:48.608684  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:48.644181  296043 cri.go:89] found id: ""
	I0214 22:01:48.644210  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.644221  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:48.644228  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:48.644280  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:48.681188  296043 cri.go:89] found id: ""
	I0214 22:01:48.681214  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.681224  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:48.681232  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:48.681286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:48.719817  296043 cri.go:89] found id: ""
	I0214 22:01:48.719847  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.719857  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:48.719865  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:48.719922  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:48.756080  296043 cri.go:89] found id: ""
	I0214 22:01:48.756107  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.756119  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:48.756127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:48.756188  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:48.796664  296043 cri.go:89] found id: ""
	I0214 22:01:48.796692  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.796703  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:48.796716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:48.796730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:48.877633  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:48.877660  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.924693  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:48.924726  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:48.980014  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:48.980045  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:48.993129  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:48.993153  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:49.067409  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:51.568106  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:51.583193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:51.583254  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:51.620026  296043 cri.go:89] found id: ""
	I0214 22:01:51.620050  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.620058  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:51.620063  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:51.620120  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:51.654068  296043 cri.go:89] found id: ""
	I0214 22:01:51.654103  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.654114  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:51.654122  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:51.654176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:51.689022  296043 cri.go:89] found id: ""
	I0214 22:01:51.689047  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.689055  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:51.689062  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:51.689118  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:51.725479  296043 cri.go:89] found id: ""
	I0214 22:01:51.725503  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.725513  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:51.725524  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:51.725576  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:51.761617  296043 cri.go:89] found id: ""
	I0214 22:01:51.761644  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.761653  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:51.761660  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:51.761719  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:51.802942  296043 cri.go:89] found id: ""
	I0214 22:01:51.802963  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.802972  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:51.802979  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:51.803027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:51.843214  296043 cri.go:89] found id: ""
	I0214 22:01:51.843242  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.843252  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:51.843264  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:51.843316  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:51.910513  296043 cri.go:89] found id: ""
	I0214 22:01:51.910550  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.910562  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:51.910576  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:51.910594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:51.923639  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:51.923676  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:52.014337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:52.014366  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:52.014384  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:52.106586  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:52.106617  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:52.154349  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:52.154376  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:54.715843  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:54.729644  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:54.729694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:54.766181  296043 cri.go:89] found id: ""
	I0214 22:01:54.766200  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.766210  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:54.766216  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:54.766276  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:54.808010  296043 cri.go:89] found id: ""
	I0214 22:01:54.808039  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.808050  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:54.808064  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:54.808130  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:54.856672  296043 cri.go:89] found id: ""
	I0214 22:01:54.856693  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.856711  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:54.856717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:54.856762  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:54.906801  296043 cri.go:89] found id: ""
	I0214 22:01:54.906820  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.906827  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:54.906833  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:54.906873  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:54.951444  296043 cri.go:89] found id: ""
	I0214 22:01:54.951467  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.951477  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:54.951485  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:54.951539  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:54.993431  296043 cri.go:89] found id: ""
	I0214 22:01:54.993457  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.993468  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:54.993476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:54.993520  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:55.040664  296043 cri.go:89] found id: ""
	I0214 22:01:55.040714  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.040726  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:55.040735  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:55.040793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:55.080280  296043 cri.go:89] found id: ""
	I0214 22:01:55.080309  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.080317  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:55.080327  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:55.080342  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:55.141974  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:55.142012  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:55.159407  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:55.159436  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:55.238973  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:55.238998  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:55.239010  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:55.326876  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:55.326907  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:57.883816  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:57.898210  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:57.898270  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:57.933120  296043 cri.go:89] found id: ""
	I0214 22:01:57.933146  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.933155  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:57.933163  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:57.933219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:57.968047  296043 cri.go:89] found id: ""
	I0214 22:01:57.968072  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.968089  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:57.968096  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:57.968150  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:58.007167  296043 cri.go:89] found id: ""
	I0214 22:01:58.007194  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.007205  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:58.007213  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:58.007263  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:58.044221  296043 cri.go:89] found id: ""
	I0214 22:01:58.044249  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.044259  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:58.044270  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:58.044322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:58.079197  296043 cri.go:89] found id: ""
	I0214 22:01:58.079226  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.079237  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:58.079246  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:58.079308  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:58.115726  296043 cri.go:89] found id: ""
	I0214 22:01:58.115757  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.115768  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:58.115779  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:58.115833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:58.151192  296043 cri.go:89] found id: ""
	I0214 22:01:58.151218  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.151226  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:58.151231  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:58.151279  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:58.186512  296043 cri.go:89] found id: ""
	I0214 22:01:58.186531  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.186539  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:58.186548  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:58.186559  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:58.225500  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:58.225528  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:58.273842  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:58.273869  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:58.297373  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:58.297401  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:58.403111  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:58.403131  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:58.403155  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:00.996658  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:01.013323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:01.013388  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:01.054606  296043 cri.go:89] found id: ""
	I0214 22:02:01.054647  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.054659  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:01.054667  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:01.054729  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:01.091830  296043 cri.go:89] found id: ""
	I0214 22:02:01.091860  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.091870  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:01.091878  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:01.091933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:01.127100  296043 cri.go:89] found id: ""
	I0214 22:02:01.127126  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.127133  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:01.127139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:01.127176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:01.160268  296043 cri.go:89] found id: ""
	I0214 22:02:01.160291  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.160298  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:01.160304  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:01.160354  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:01.192244  296043 cri.go:89] found id: ""
	I0214 22:02:01.192277  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.192290  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:01.192301  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:01.192372  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:01.226746  296043 cri.go:89] found id: ""
	I0214 22:02:01.226777  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.226787  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:01.226797  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:01.226848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:01.264235  296043 cri.go:89] found id: ""
	I0214 22:02:01.264257  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.264266  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:01.264274  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:01.264325  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:01.299082  296043 cri.go:89] found id: ""
	I0214 22:02:01.299107  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.299119  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:01.299137  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:01.299152  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:01.374067  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:01.374087  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:01.374100  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:01.466814  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:01.466842  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:01.508566  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:01.508591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:01.565286  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:01.565318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.079276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:04.098100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:04.098168  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:04.148307  296043 cri.go:89] found id: ""
	I0214 22:02:04.148338  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.148347  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:04.148353  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:04.148401  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:04.182456  296043 cri.go:89] found id: ""
	I0214 22:02:04.182483  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.182493  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:04.182500  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:04.182548  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:04.222072  296043 cri.go:89] found id: ""
	I0214 22:02:04.222099  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.222107  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:04.222112  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:04.222155  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:04.255053  296043 cri.go:89] found id: ""
	I0214 22:02:04.255082  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.255092  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:04.255100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:04.255154  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:04.293951  296043 cri.go:89] found id: ""
	I0214 22:02:04.293982  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.293991  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:04.293998  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:04.294051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:04.334092  296043 cri.go:89] found id: ""
	I0214 22:02:04.334115  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.334123  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:04.334130  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:04.334179  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:04.366129  296043 cri.go:89] found id: ""
	I0214 22:02:04.366148  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.366160  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:04.366166  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:04.366207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:04.398508  296043 cri.go:89] found id: ""
	I0214 22:02:04.398532  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.398541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:04.398554  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:04.398567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:04.446518  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:04.446547  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.459347  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:04.459368  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:04.535181  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:04.535198  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:04.535212  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:04.608858  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:04.608891  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:07.150996  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:07.164414  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:07.164466  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:07.198549  296043 cri.go:89] found id: ""
	I0214 22:02:07.198571  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.198579  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:07.198585  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:07.198644  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:07.231429  296043 cri.go:89] found id: ""
	I0214 22:02:07.231454  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.231465  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:07.231472  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:07.231527  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:07.262244  296043 cri.go:89] found id: ""
	I0214 22:02:07.262266  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.262273  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:07.262278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:07.262322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:07.292654  296043 cri.go:89] found id: ""
	I0214 22:02:07.292670  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.292677  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:07.292686  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:07.292731  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:07.325893  296043 cri.go:89] found id: ""
	I0214 22:02:07.325911  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.325918  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:07.325923  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:07.325961  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:07.358776  296043 cri.go:89] found id: ""
	I0214 22:02:07.358799  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.358806  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:07.358811  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:07.358855  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:07.392029  296043 cri.go:89] found id: ""
	I0214 22:02:07.392052  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.392062  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:07.392073  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:07.392132  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:07.423080  296043 cri.go:89] found id: ""
	I0214 22:02:07.423105  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.423115  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:07.423128  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:07.423142  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:07.473625  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:07.473649  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:07.486487  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:07.486510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:07.550364  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:07.550387  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:07.550400  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:07.620727  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:07.620750  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:10.158575  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:10.171139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:10.171189  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:10.203796  296043 cri.go:89] found id: ""
	I0214 22:02:10.203825  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.203837  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:10.203847  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:10.203905  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:10.235261  296043 cri.go:89] found id: ""
	I0214 22:02:10.235279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.235287  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:10.235292  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:10.235331  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:10.267017  296043 cri.go:89] found id: ""
	I0214 22:02:10.267037  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.267044  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:10.267052  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:10.267110  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:10.298100  296043 cri.go:89] found id: ""
	I0214 22:02:10.298121  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.298127  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:10.298133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:10.298173  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:10.330163  296043 cri.go:89] found id: ""
	I0214 22:02:10.330189  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.330196  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:10.330205  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:10.330257  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:10.363253  296043 cri.go:89] found id: ""
	I0214 22:02:10.363279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.363287  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:10.363293  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:10.363345  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:10.393052  296043 cri.go:89] found id: ""
	I0214 22:02:10.393073  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.393081  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:10.393086  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:10.393124  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:10.423261  296043 cri.go:89] found id: ""
	I0214 22:02:10.423284  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.423292  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:10.423302  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:10.423314  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:10.474817  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:10.474839  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:10.487089  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:10.487117  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:10.552798  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:10.552818  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:10.552827  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:10.633678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:10.633700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:13.175779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:13.188862  296043 kubeadm.go:593] duration metric: took 4m4.534890262s to restartPrimaryControlPlane
	W0214 22:02:13.188929  296043 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0214 22:02:13.188953  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:02:14.903694  296043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.714713868s)
	I0214 22:02:14.903774  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:02:14.917520  296043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:02:14.927114  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:02:14.936531  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:02:14.936548  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:02:14.936593  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:02:14.945506  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:02:14.945543  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:02:14.954573  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:02:14.963268  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:02:14.963308  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:02:14.972385  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.981144  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:02:14.981190  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.990181  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:02:14.998739  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:02:14.998781  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:02:15.007880  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:02:15.079968  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:02:15.080063  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:02:15.227132  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:02:15.227264  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:02:15.227363  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:02:15.399613  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:02:15.401413  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:02:15.401514  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:02:15.401584  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:02:15.401699  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:02:15.401787  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:02:15.401887  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:02:15.403287  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:02:15.403395  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:02:15.403485  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:02:15.403584  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:02:15.403691  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:02:15.403760  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:02:15.403854  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:02:15.575946  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:02:15.646531  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:02:16.039563  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:02:16.210385  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:02:16.225322  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:02:16.226388  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:02:16.226445  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:02:16.354308  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:02:16.356102  296043 out.go:235]   - Booting up control plane ...
	I0214 22:02:16.356211  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:02:16.360283  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:02:16.361731  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:02:16.362515  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:02:16.373807  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:02:56.375481  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:02:56.376996  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:02:56.377215  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:01.377539  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:01.377722  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:11.378071  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:11.378255  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:31.379013  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:31.379253  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.380898  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:11.381134  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.381161  296043 kubeadm.go:310] 
	I0214 22:04:11.381223  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:04:11.381276  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:04:11.381287  296043 kubeadm.go:310] 
	I0214 22:04:11.381330  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:04:11.381386  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:04:11.381508  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:04:11.381517  296043 kubeadm.go:310] 
	I0214 22:04:11.381610  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:04:11.381661  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:04:11.381706  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:04:11.381713  296043 kubeadm.go:310] 
	I0214 22:04:11.381844  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:04:11.381962  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:04:11.381985  296043 kubeadm.go:310] 
	I0214 22:04:11.382159  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:04:11.382269  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:04:11.382378  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:04:11.382478  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:04:11.382488  296043 kubeadm.go:310] 
	I0214 22:04:11.383608  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:04:11.383712  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:04:11.383805  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0214 22:04:11.383962  296043 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 22:04:11.384029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:04:11.847932  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:04:11.862250  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:04:11.872076  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:04:11.872096  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:04:11.872141  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:04:11.881248  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:04:11.881299  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:04:11.890591  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:04:11.899561  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:04:11.899609  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:04:11.908818  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.917642  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:04:11.917688  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.926938  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:04:11.936007  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:04:11.936053  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:04:11.945314  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:04:12.015411  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:04:12.015466  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:04:12.151668  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:04:12.151844  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:04:12.151988  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:04:12.322327  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:04:12.324344  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:04:12.324451  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:04:12.324530  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:04:12.324659  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:04:12.324761  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:04:12.324855  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:04:12.324934  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:04:12.325109  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:04:12.325566  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:04:12.325866  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:04:12.326334  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:04:12.326391  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:04:12.326453  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:04:12.468450  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:04:12.741068  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:04:12.905628  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:04:13.075487  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:04:13.093105  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:04:13.093840  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:04:13.093897  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:04:13.225868  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:04:13.227602  296043 out.go:235]   - Booting up control plane ...
	I0214 22:04:13.227715  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:04:13.235626  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:04:13.238592  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:04:13.239495  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:04:13.246539  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:04:53.249274  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:04:53.249602  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:53.249764  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:58.250244  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:58.250486  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:08.251032  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:08.251247  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:28.253223  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:28.253527  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252450  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:06:08.252752  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252786  296043 kubeadm.go:310] 
	I0214 22:06:08.252841  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:06:08.252891  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:06:08.252909  296043 kubeadm.go:310] 
	I0214 22:06:08.252957  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:06:08.253010  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:06:08.253150  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:06:08.253160  296043 kubeadm.go:310] 
	I0214 22:06:08.253287  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:06:08.253332  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:06:08.253372  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:06:08.253403  296043 kubeadm.go:310] 
	I0214 22:06:08.253569  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:06:08.253692  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:06:08.253701  296043 kubeadm.go:310] 
	I0214 22:06:08.253861  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:06:08.253990  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:06:08.254095  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:06:08.254195  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:06:08.254206  296043 kubeadm.go:310] 
	I0214 22:06:08.254491  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:06:08.254637  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:06:08.254748  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 22:06:08.254848  296043 kubeadm.go:394] duration metric: took 7m59.662371118s to StartCluster
	I0214 22:06:08.254965  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:06:08.255027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:06:08.298673  296043 cri.go:89] found id: ""
	I0214 22:06:08.298694  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.298702  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:06:08.298709  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:06:08.298777  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:06:08.329697  296043 cri.go:89] found id: ""
	I0214 22:06:08.329717  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.329724  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:06:08.329729  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:06:08.329779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:06:08.360276  296043 cri.go:89] found id: ""
	I0214 22:06:08.360296  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.360304  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:06:08.360310  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:06:08.360370  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:06:08.391153  296043 cri.go:89] found id: ""
	I0214 22:06:08.391180  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.391188  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:06:08.391193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:06:08.391244  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:06:08.421880  296043 cri.go:89] found id: ""
	I0214 22:06:08.421907  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.421917  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:06:08.421924  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:06:08.421974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:06:08.453558  296043 cri.go:89] found id: ""
	I0214 22:06:08.453578  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.453587  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:06:08.453594  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:06:08.453641  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:06:08.495718  296043 cri.go:89] found id: ""
	I0214 22:06:08.495750  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.495761  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:06:08.495772  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:06:08.495845  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:06:08.542115  296043 cri.go:89] found id: ""
	I0214 22:06:08.542141  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.542152  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:06:08.542165  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:06:08.542180  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:06:08.605825  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:06:08.605851  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:06:08.621228  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:06:08.621251  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:06:08.696999  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:06:08.697025  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:06:08.697050  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:06:08.796690  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:06:08.796716  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:06:08.834010  296043 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 22:06:08.834068  296043 out.go:270] * 
	W0214 22:06:08.834153  296043 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.834166  296043 out.go:270] * 
	W0214 22:06:08.835011  296043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 22:06:08.838512  296043 out.go:201] 
	W0214 22:06:08.839577  296043 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.839631  296043 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 22:06:08.839655  296043 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 22:06:08.840885  296043 out.go:201] 
	
	
	==> CRI-O <==
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.655267075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571311655247263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2f50bcf-4161-4921-b949-c0ea41322f2e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.655849677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9becd8ba-d629-4306-8de4-fa9cc57b2a1b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.655889666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9becd8ba-d629-4306-8de4-fa9cc57b2a1b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.655921919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9becd8ba-d629-4306-8de4-fa9cc57b2a1b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.682369306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b940a24a-3871-48f7-8992-a65b1094cacf name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.682425083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b940a24a-3871-48f7-8992-a65b1094cacf name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.683812486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e39d83c-489d-4dd9-b823-ddb212f27b49 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.684141679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571311684122147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e39d83c-489d-4dd9-b823-ddb212f27b49 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.684906000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d50dcd8-6a34-4150-bd84-872d16fb2eaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.684961425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d50dcd8-6a34-4150-bd84-872d16fb2eaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.684997225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d50dcd8-6a34-4150-bd84-872d16fb2eaf name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.714915723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66029694-1b70-4b1b-843a-871aaa54ebb9 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.714971969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66029694-1b70-4b1b-843a-871aaa54ebb9 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.716754909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9df3acf1-f9f6-4830-8a2d-1760e7970171 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.717113952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571311717100440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9df3acf1-f9f6-4830-8a2d-1760e7970171 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.717701523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b05ff61e-0dd2-4d90-8dab-00b880502d3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.717746446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b05ff61e-0dd2-4d90-8dab-00b880502d3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.717772015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b05ff61e-0dd2-4d90-8dab-00b880502d3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.745737680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=77a2fd7b-ef50-4a75-8dec-702398bfeaac name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.745784483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=77a2fd7b-ef50-4a75-8dec-702398bfeaac name=/runtime.v1.RuntimeService/Version
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.746853793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97dc386d-01a0-48a2-a33e-4258c0ffef7b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.747257725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571311747239319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97dc386d-01a0-48a2-a33e-4258c0ffef7b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.747716359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2ed2bd7-0427-4604-a26c-670c89ec3cc5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.747762022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2ed2bd7-0427-4604-a26c-670c89ec3cc5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:15:11 old-k8s-version-201745 crio[638]: time="2025-02-14 22:15:11.747794293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e2ed2bd7-0427-4604-a26c-670c89ec3cc5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb14 21:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060243] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046957] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.427674] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.890736] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.894421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.931911] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.056852] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063369] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.207712] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.154341] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[Feb14 21:58] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.870486] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.069737] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.465278] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +9.377456] kauditd_printk_skb: 46 callbacks suppressed
	[Feb14 22:02] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Feb14 22:04] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.064085] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:15:11 up 17 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux old-k8s-version-201745 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc000c013e0, 0x48ab5d6, 0x3, 0xc000b3c870, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c013e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b3c870, 0x24, 0x0, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net.(*Dialer).DialContext(0xc000259500, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3c870, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008de760, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3c870, 0x24, 0x60, 0x7f28a97bd538, 0x118, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net/http.(*Transport).dial(0xc000a85680, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b3c870, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net/http.(*Transport).dialConn(0xc000a85680, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c7e300, 0x5, 0xc000b3c870, 0x24, 0x0, 0xc000aa57a0, ...)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: net/http.(*Transport).dialConnFor(0xc000a85680, 0xc000ac9ad0)
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]: created by net/http.(*Transport).queueForDial
	Feb 14 22:15:08 old-k8s-version-201745 kubelet[6502]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 14 22:15:09 old-k8s-version-201745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 14 22:15:09 old-k8s-version-201745 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 22:15:09 old-k8s-version-201745 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 22:15:09 old-k8s-version-201745 kubelet[6511]: I0214 22:15:09.089407    6511 server.go:416] Version: v1.20.0
	Feb 14 22:15:09 old-k8s-version-201745 kubelet[6511]: I0214 22:15:09.089620    6511 server.go:837] Client rotation is on, will bootstrap in background
	Feb 14 22:15:09 old-k8s-version-201745 kubelet[6511]: I0214 22:15:09.091272    6511 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 22:15:09 old-k8s-version-201745 kubelet[6511]: W0214 22:15:09.092041    6511 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 14 22:15:09 old-k8s-version-201745 kubelet[6511]: I0214 22:15:09.092230    6511 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (227.317554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-201745" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.75s)
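The start log above ends with minikube's own suggestion to check 'journalctl -xeu kubelet' and to retry with the systemd cgroup driver (related issue kubernetes/minikube#4172). A minimal follow-up sketch, assuming the profile name old-k8s-version-201745 taken from the log; the individual commands are the ones the kubeadm/minikube output itself recommends, wrapped in 'minikube ssh', and are not part of the captured test output:

	# inspect the kubelet inside the minikube VM (wrapping in 'minikube ssh --' is an assumption for convenience)
	minikube -p old-k8s-version-201745 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-201745 ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# list any control-plane containers CRI-O started (the 'container status' section above is empty)
	minikube -p old-k8s-version-201745 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# retry the start with the cgroup driver the failure output suggests
	minikube start -p old-k8s-version-201745 --extra-config=kubelet.cgroup-driver=systemd

Whether the retry flag resolves this particular run depends on the node's cgroup configuration; the kubelet log above only shows it crash-looping while dialing the API server.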

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (279.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:15:13.382314  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:15:38.597300  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:16:23.274746  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:16:24.386823  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:16:40.309191  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:16:42.290445  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:18:00.200290  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
E0214 22:18:04.860460  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
[the warning above repeats 60 more times, verbatim]
E0214 22:19:06.393730  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
[the warning above repeats 16 more times, verbatim]
E0214 22:19:23.263863  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
[the warning above repeats 17 more times, verbatim]
E0214 22:19:40.648296  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.19:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.19:8443: connect: connection refused
[the warning above repeats 9 more times, verbatim]
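The warnings above all come from the same pod-list poll: the test helper keeps listing pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace while the apiserver at 192.168.72.19:8443 refuses connections. A rough manual equivalent of that query (a sketch, assuming the same kubeconfig context used elsewhere in this run) would be:

    kubectl --context old-k8s-version-201745 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # fails with "connection refused" until the apiserver on 192.168.72.19:8443 is reachable again
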
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (233.223724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-201745" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-201745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-201745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.527µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-201745 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
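The assertion at start_stop_delete_test.go:295 expects the dashboard-metrics-scraper deployment to reference an image containing registry.k8s.io/echoserver:1.4, but the describe call above returned nothing because the apiserver was unreachable. Once the cluster responds again, a minimal manual check (a sketch, assuming the deployment name and namespace shown above) would be:

    kubectl --context old-k8s-version-201745 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
    # the addon check only passes if the printed image string contains registry.k8s.io/echoserver:1.4
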
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (222.155727ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-201745 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-266997 sudo iptables                       | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:01 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:01 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo docker                         | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo cat                            | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo                                | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo find                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-266997 sudo crio                           | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-266997                                     | bridge-266997 | jenkins | v1.35.0 | 14 Feb 25 22:02 UTC | 14 Feb 25 22:02 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 22:00:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 22:00:40.013497  304371 out.go:345] Setting OutFile to fd 1 ...
	I0214 22:00:40.013688  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013723  304371 out.go:358] Setting ErrFile to fd 2...
	I0214 22:00:40.013740  304371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 22:00:40.013941  304371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 22:00:40.014539  304371 out.go:352] Setting JSON to false
	I0214 22:00:40.015878  304371 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9784,"bootTime":1739560656,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 22:00:40.015969  304371 start.go:140] virtualization: kvm guest
	I0214 22:00:40.017995  304371 out.go:177] * [bridge-266997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 22:00:40.019548  304371 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 22:00:40.019559  304371 notify.go:220] Checking for updates...
	I0214 22:00:40.021770  304371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 22:00:40.022963  304371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:00:40.024165  304371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.025322  304371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 22:00:40.026557  304371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 22:00:40.028422  304371 config.go:182] Loaded profile config "enable-default-cni-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028571  304371 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.028707  304371 config.go:182] Loaded profile config "old-k8s-version-201745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 22:00:40.028816  304371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 22:00:40.075364  304371 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 22:00:40.076500  304371 start.go:304] selected driver: kvm2
	I0214 22:00:40.076529  304371 start.go:908] validating driver "kvm2" against <nil>
	I0214 22:00:40.076547  304371 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 22:00:40.077631  304371 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.077721  304371 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 22:00:40.097536  304371 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 22:00:40.097586  304371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 22:00:40.097859  304371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:00:40.097901  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:00:40.097911  304371 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 22:00:40.097991  304371 start.go:347] cluster config:
	{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:40.098147  304371 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 22:00:40.099655  304371 out.go:177] * Starting "bridge-266997" primary control-plane node in "bridge-266997" cluster
	I0214 22:00:40.100707  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:40.100759  304371 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 22:00:40.100773  304371 cache.go:56] Caching tarball of preloaded images
	I0214 22:00:40.100872  304371 preload.go:172] Found /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0214 22:00:40.100888  304371 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0214 22:00:40.100998  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:00:40.101023  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json: {Name:mk956d7ec0a679c86c01d5e19aaca4ffe835db04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:40.101195  304371 start.go:360] acquireMachinesLock for bridge-266997: {Name:mke9cb761c0a90adc4921ae56b4c4d949b22fb16 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0214 22:00:40.739410  304371 start.go:364] duration metric: took 638.071669ms to acquireMachinesLock for "bridge-266997"
	I0214 22:00:40.739470  304371 start.go:93] Provisioning new machine with config: &{Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:00:40.739597  304371 start.go:125] createHost starting for "" (driver="kvm2")
	I0214 22:00:38.638103  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638775  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has current primary IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.638815  302662 main.go:141] libmachine: (flannel-266997) found domain IP: 192.168.61.227
	I0214 22:00:38.638837  302662 main.go:141] libmachine: (flannel-266997) reserving static IP address...
	I0214 22:00:38.639227  302662 main.go:141] libmachine: (flannel-266997) DBG | unable to find host DHCP lease matching {name: "flannel-266997", mac: "52:54:00:ee:24:91", ip: "192.168.61.227"} in network mk-flannel-266997
	I0214 22:00:38.720741  302662 main.go:141] libmachine: (flannel-266997) reserved static IP address 192.168.61.227 for domain flannel-266997
	I0214 22:00:38.720767  302662 main.go:141] libmachine: (flannel-266997) DBG | Getting to WaitForSSH function...
	I0214 22:00:38.720774  302662 main.go:141] libmachine: (flannel-266997) waiting for SSH...
	I0214 22:00:38.723657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724193  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.724222  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.724376  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH client type: external
	I0214 22:00:38.724398  302662 main.go:141] libmachine: (flannel-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa (-rw-------)
	I0214 22:00:38.724424  302662 main.go:141] libmachine: (flannel-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:00:38.724432  302662 main.go:141] libmachine: (flannel-266997) DBG | About to run SSH command:
	I0214 22:00:38.724443  302662 main.go:141] libmachine: (flannel-266997) DBG | exit 0
	I0214 22:00:38.855089  302662 main.go:141] libmachine: (flannel-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:00:38.855431  302662 main.go:141] libmachine: (flannel-266997) KVM machine creation complete
	I0214 22:00:38.855717  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:38.856304  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856540  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:38.856736  302662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:00:38.856755  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:00:38.858099  302662 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:00:38.858126  302662 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:00:38.858133  302662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:00:38.858141  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.860473  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860742  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.860769  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.860866  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.861047  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861239  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.861397  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.861554  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.861789  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.861802  302662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:00:38.987056  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:38.987080  302662 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:00:38.987090  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:38.991287  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.991867  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:38.991901  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:38.992117  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:38.992347  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992546  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:38.992737  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:38.992969  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:38.993199  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:38.993218  302662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:00:39.120019  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:00:39.120118  302662 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:00:39.120133  302662 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:00:39.120144  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120439  302662 buildroot.go:166] provisioning hostname "flannel-266997"
	I0214 22:00:39.120468  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.120637  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.123699  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279544  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.279574  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.279895  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.280156  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280385  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.280554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.280752  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.280990  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.281008  302662 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-266997 && echo "flannel-266997" | sudo tee /etc/hostname
	I0214 22:00:39.418566  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-266997
	
	I0214 22:00:39.418600  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.696405  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.696786  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.696816  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.697106  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:39.697346  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697519  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:39.697673  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:39.697837  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:39.698062  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:39.698079  302662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:00:39.838034  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:00:39.838073  302662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:00:39.838101  302662 buildroot.go:174] setting up certificates
	I0214 22:00:39.838118  302662 provision.go:84] configureAuth start
	I0214 22:00:39.838134  302662 main.go:141] libmachine: (flannel-266997) Calling .GetMachineName
	I0214 22:00:39.838437  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:39.841947  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842398  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.842423  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.842549  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:39.845575  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846164  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:39.846413  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:39.846385  302662 provision.go:143] copyHostCerts
	I0214 22:00:39.846558  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:00:39.846578  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:00:39.846685  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:00:39.846828  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:00:39.846841  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:00:39.846885  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:00:39.846995  302662 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:00:39.847008  302662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:00:39.847066  302662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:00:39.847177  302662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.flannel-266997 san=[127.0.0.1 192.168.61.227 flannel-266997 localhost minikube]
	I0214 22:00:40.050848  302662 provision.go:177] copyRemoteCerts
	I0214 22:00:40.050928  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:00:40.050984  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.054657  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055071  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.055100  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.055790  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.056179  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.056663  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.056830  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.157340  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:00:40.184601  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0214 22:00:40.210273  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 22:00:40.235456  302662 provision.go:87] duration metric: took 397.323852ms to configureAuth
	I0214 22:00:40.235484  302662 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:00:40.235682  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:00:40.235775  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.238280  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238712  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.238751  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.238935  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.239137  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239310  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.239478  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.239662  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.239824  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.239838  302662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:00:40.477460  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:00:40.477495  302662 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:00:40.477529  302662 main.go:141] libmachine: (flannel-266997) Calling .GetURL
	I0214 22:00:40.478939  302662 main.go:141] libmachine: (flannel-266997) DBG | using libvirt version 6000000
	I0214 22:00:40.481396  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481778  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.481807  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.481953  302662 main.go:141] libmachine: Docker is up and running!
	I0214 22:00:40.481977  302662 main.go:141] libmachine: Reticulating splines...
	I0214 22:00:40.481987  302662 client.go:171] duration metric: took 23.84148991s to LocalClient.Create
	I0214 22:00:40.482019  302662 start.go:167] duration metric: took 23.841568434s to libmachine.API.Create "flannel-266997"
	I0214 22:00:40.482032  302662 start.go:293] postStartSetup for "flannel-266997" (driver="kvm2")
	I0214 22:00:40.482052  302662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:00:40.482086  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.482376  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:00:40.482407  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.484968  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485363  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.485394  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.485554  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.485749  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.485890  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.486025  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.573729  302662 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:00:40.577977  302662 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:00:40.578003  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:00:40.578075  302662 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:00:40.578180  302662 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:00:40.578302  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:00:40.588072  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:00:40.612075  302662 start.go:296] duration metric: took 130.020062ms for postStartSetup
	I0214 22:00:40.612132  302662 main.go:141] libmachine: (flannel-266997) Calling .GetConfigRaw
	I0214 22:00:40.612708  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.615427  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.615734  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.615764  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.616036  302662 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/config.json ...
	I0214 22:00:40.616256  302662 start.go:128] duration metric: took 23.993767271s to createHost
	I0214 22:00:40.616279  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.618824  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619145  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.619172  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.619365  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.619515  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619667  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.619812  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.619942  302662 main.go:141] libmachine: Using SSH client type: native
	I0214 22:00:40.620120  302662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0214 22:00:40.620135  302662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:00:40.739233  302662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570440.696234424
	
	I0214 22:00:40.739258  302662 fix.go:216] guest clock: 1739570440.696234424
	I0214 22:00:40.739268  302662 fix.go:229] Guest: 2025-02-14 22:00:40.696234424 +0000 UTC Remote: 2025-02-14 22:00:40.616269623 +0000 UTC m=+24.118806419 (delta=79.964801ms)
	I0214 22:00:40.739303  302662 fix.go:200] guest clock delta is within tolerance: 79.964801ms
	I0214 22:00:40.739310  302662 start.go:83] releasing machines lock for "flannel-266997", held for 24.116939765s
	I0214 22:00:40.739341  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.739624  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:40.742553  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.742948  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.742975  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.743235  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743808  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.743985  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:00:40.744102  302662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:00:40.744175  302662 ssh_runner.go:195] Run: cat /version.json
	I0214 22:00:40.744198  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.744177  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:00:40.747113  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747256  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747420  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747485  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747553  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.747704  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.747663  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:40.747759  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:40.747849  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.747915  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:00:40.748050  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.748071  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:00:40.748190  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:00:40.748337  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:00:40.836766  302662 ssh_runner.go:195] Run: systemctl --version
	I0214 22:00:40.864976  302662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:00:41.030697  302662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:00:41.037406  302662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:00:41.037479  302662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:00:41.054755  302662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 22:00:41.054780  302662 start.go:495] detecting cgroup driver to use...
	I0214 22:00:41.054846  302662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:00:41.070471  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:00:41.085648  302662 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:00:41.085703  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:00:41.101988  302662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:00:41.118492  302662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:00:41.258887  302662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:00:41.416252  302662 docker.go:233] disabling docker service ...
	I0214 22:00:41.416318  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:00:41.433330  302662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:00:41.447924  302662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W0214 22:00:36.876425  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:36.876444  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:36.876460  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:36.954714  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:36.954740  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:39.500037  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:39.520812  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:39.520889  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:39.562216  296043 cri.go:89] found id: ""
	I0214 22:00:39.562250  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.562263  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:39.562271  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:39.562336  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:39.601201  296043 cri.go:89] found id: ""
	I0214 22:00:39.601234  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.601247  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:39.601255  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:39.601315  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:39.640202  296043 cri.go:89] found id: ""
	I0214 22:00:39.640231  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.640242  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:39.640250  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:39.640307  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:39.674932  296043 cri.go:89] found id: ""
	I0214 22:00:39.674960  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.674972  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:39.674981  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:39.675042  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:39.724788  296043 cri.go:89] found id: ""
	I0214 22:00:39.724820  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.724833  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:39.724841  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:39.724908  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:39.771267  296043 cri.go:89] found id: ""
	I0214 22:00:39.771295  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.771306  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:39.771314  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:39.771369  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:39.810824  296043 cri.go:89] found id: ""
	I0214 22:00:39.810852  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.810864  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:39.810871  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:39.810933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:39.852769  296043 cri.go:89] found id: ""
	I0214 22:00:39.852794  296043 logs.go:282] 0 containers: []
	W0214 22:00:39.852803  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:39.852815  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:39.852831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:39.906779  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:39.906808  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:39.924045  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:39.924072  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:40.027558  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:40.027580  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:40.027594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:40.130386  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:40.130415  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:41.665522  302662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:00:41.808101  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:00:41.827287  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:00:41.846475  302662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:00:41.846535  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.858296  302662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:00:41.858365  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.871564  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.892941  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.914718  302662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:00:41.929404  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.943358  302662 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.967621  302662 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:00:41.981572  302662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:00:41.993282  302662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:00:41.993338  302662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:00:42.007298  302662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:00:42.020823  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:42.168987  302662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 22:00:42.522679  302662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:00:42.522753  302662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:00:42.527926  302662 start.go:563] Will wait 60s for crictl version
	I0214 22:00:42.528000  302662 ssh_runner.go:195] Run: which crictl
	I0214 22:00:42.532262  302662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:00:42.583646  302662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:00:42.583793  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.613308  302662 ssh_runner.go:195] Run: crio --version
	I0214 22:00:42.651554  302662 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:00:40.740919  304371 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0214 22:00:40.741156  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:00:40.741214  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:00:40.758664  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0214 22:00:40.759104  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:00:40.759684  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:00:40.759711  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:00:40.760116  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:00:40.760351  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:00:40.760523  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:00:40.760689  304371 start.go:159] libmachine.API.Create for "bridge-266997" (driver="kvm2")
	I0214 22:00:40.760732  304371 client.go:168] LocalClient.Create starting
	I0214 22:00:40.760769  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem
	I0214 22:00:40.760801  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760820  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760889  304371 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem
	I0214 22:00:40.760925  304371 main.go:141] libmachine: Decoding PEM data...
	I0214 22:00:40.760947  304371 main.go:141] libmachine: Parsing certificate...
	I0214 22:00:40.760973  304371 main.go:141] libmachine: Running pre-create checks...
	I0214 22:00:40.760985  304371 main.go:141] libmachine: (bridge-266997) Calling .PreCreateCheck
	I0214 22:00:40.761428  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:00:40.761930  304371 main.go:141] libmachine: Creating machine...
	I0214 22:00:40.761945  304371 main.go:141] libmachine: (bridge-266997) Calling .Create
	I0214 22:00:40.762102  304371 main.go:141] libmachine: (bridge-266997) creating KVM machine...
	I0214 22:00:40.762121  304371 main.go:141] libmachine: (bridge-266997) creating network...
	I0214 22:00:40.763213  304371 main.go:141] libmachine: (bridge-266997) DBG | found existing default KVM network
	I0214 22:00:40.764445  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.764318  304393 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:fa:84} reservation:<nil>}
	I0214 22:00:40.765726  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.765653  304393 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000266bc0}
	I0214 22:00:40.765754  304371 main.go:141] libmachine: (bridge-266997) DBG | created network xml: 
	I0214 22:00:40.765764  304371 main.go:141] libmachine: (bridge-266997) DBG | <network>
	I0214 22:00:40.765774  304371 main.go:141] libmachine: (bridge-266997) DBG |   <name>mk-bridge-266997</name>
	I0214 22:00:40.765780  304371 main.go:141] libmachine: (bridge-266997) DBG |   <dns enable='no'/>
	I0214 22:00:40.765786  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765794  304371 main.go:141] libmachine: (bridge-266997) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0214 22:00:40.765810  304371 main.go:141] libmachine: (bridge-266997) DBG |     <dhcp>
	I0214 22:00:40.765819  304371 main.go:141] libmachine: (bridge-266997) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0214 22:00:40.765830  304371 main.go:141] libmachine: (bridge-266997) DBG |     </dhcp>
	I0214 22:00:40.765836  304371 main.go:141] libmachine: (bridge-266997) DBG |   </ip>
	I0214 22:00:40.765843  304371 main.go:141] libmachine: (bridge-266997) DBG |   
	I0214 22:00:40.765848  304371 main.go:141] libmachine: (bridge-266997) DBG | </network>
	I0214 22:00:40.765856  304371 main.go:141] libmachine: (bridge-266997) DBG | 
	I0214 22:00:40.770689  304371 main.go:141] libmachine: (bridge-266997) DBG | trying to create private KVM network mk-bridge-266997 192.168.50.0/24...
	I0214 22:00:40.854522  304371 main.go:141] libmachine: (bridge-266997) DBG | private KVM network mk-bridge-266997 192.168.50.0/24 created
	I0214 22:00:40.854555  304371 main.go:141] libmachine: (bridge-266997) setting up store path in /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:40.854570  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:40.854493  304393 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:40.854582  304371 main.go:141] libmachine: (bridge-266997) building disk image from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 22:00:40.854672  304371 main.go:141] libmachine: (bridge-266997) Downloading /home/jenkins/minikube-integration/20315-243456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0214 22:00:41.215883  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.215729  304393 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa...
	I0214 22:00:41.309617  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309464  304393 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk...
	I0214 22:00:41.309654  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing magic tar header
	I0214 22:00:41.309668  304371 main.go:141] libmachine: (bridge-266997) DBG | Writing SSH key tar header
	I0214 22:00:41.309681  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.309616  304393 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 ...
	I0214 22:00:41.309770  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997
	I0214 22:00:41.309791  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube/machines
	I0214 22:00:41.309807  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 22:00:41.309822  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20315-243456
	I0214 22:00:41.309835  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997 (perms=drwx------)
	I0214 22:00:41.309848  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube/machines (perms=drwxr-xr-x)
	I0214 22:00:41.309858  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456/.minikube (perms=drwxr-xr-x)
	I0214 22:00:41.309871  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0214 22:00:41.309884  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration/20315-243456 (perms=drwxrwxr-x)
	I0214 22:00:41.309910  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0214 22:00:41.309927  304371 main.go:141] libmachine: (bridge-266997) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0214 22:00:41.309938  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home/jenkins
	I0214 22:00:41.309949  304371 main.go:141] libmachine: (bridge-266997) DBG | checking permissions on dir: /home
	I0214 22:00:41.309959  304371 main.go:141] libmachine: (bridge-266997) DBG | skipping /home - not owner
	I0214 22:00:41.309969  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.311296  304371 main.go:141] libmachine: (bridge-266997) define libvirt domain using xml: 
	I0214 22:00:41.311319  304371 main.go:141] libmachine: (bridge-266997) <domain type='kvm'>
	I0214 22:00:41.311329  304371 main.go:141] libmachine: (bridge-266997)   <name>bridge-266997</name>
	I0214 22:00:41.311357  304371 main.go:141] libmachine: (bridge-266997)   <memory unit='MiB'>3072</memory>
	I0214 22:00:41.311407  304371 main.go:141] libmachine: (bridge-266997)   <vcpu>2</vcpu>
	I0214 22:00:41.311453  304371 main.go:141] libmachine: (bridge-266997)   <features>
	I0214 22:00:41.311464  304371 main.go:141] libmachine: (bridge-266997)     <acpi/>
	I0214 22:00:41.311473  304371 main.go:141] libmachine: (bridge-266997)     <apic/>
	I0214 22:00:41.311482  304371 main.go:141] libmachine: (bridge-266997)     <pae/>
	I0214 22:00:41.311492  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311501  304371 main.go:141] libmachine: (bridge-266997)   </features>
	I0214 22:00:41.311522  304371 main.go:141] libmachine: (bridge-266997)   <cpu mode='host-passthrough'>
	I0214 22:00:41.311533  304371 main.go:141] libmachine: (bridge-266997)   
	I0214 22:00:41.311543  304371 main.go:141] libmachine: (bridge-266997)   </cpu>
	I0214 22:00:41.311556  304371 main.go:141] libmachine: (bridge-266997)   <os>
	I0214 22:00:41.311566  304371 main.go:141] libmachine: (bridge-266997)     <type>hvm</type>
	I0214 22:00:41.311575  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='cdrom'/>
	I0214 22:00:41.311585  304371 main.go:141] libmachine: (bridge-266997)     <boot dev='hd'/>
	I0214 22:00:41.311597  304371 main.go:141] libmachine: (bridge-266997)     <bootmenu enable='no'/>
	I0214 22:00:41.311604  304371 main.go:141] libmachine: (bridge-266997)   </os>
	I0214 22:00:41.311615  304371 main.go:141] libmachine: (bridge-266997)   <devices>
	I0214 22:00:41.311623  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='cdrom'>
	I0214 22:00:41.311640  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/boot2docker.iso'/>
	I0214 22:00:41.311651  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hdc' bus='scsi'/>
	I0214 22:00:41.311659  304371 main.go:141] libmachine: (bridge-266997)       <readonly/>
	I0214 22:00:41.311669  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311679  304371 main.go:141] libmachine: (bridge-266997)     <disk type='file' device='disk'>
	I0214 22:00:41.311691  304371 main.go:141] libmachine: (bridge-266997)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0214 22:00:41.311708  304371 main.go:141] libmachine: (bridge-266997)       <source file='/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/bridge-266997.rawdisk'/>
	I0214 22:00:41.311719  304371 main.go:141] libmachine: (bridge-266997)       <target dev='hda' bus='virtio'/>
	I0214 22:00:41.311731  304371 main.go:141] libmachine: (bridge-266997)     </disk>
	I0214 22:00:41.311745  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311758  304371 main.go:141] libmachine: (bridge-266997)       <source network='mk-bridge-266997'/>
	I0214 22:00:41.311768  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311784  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311795  304371 main.go:141] libmachine: (bridge-266997)     <interface type='network'>
	I0214 22:00:41.311806  304371 main.go:141] libmachine: (bridge-266997)       <source network='default'/>
	I0214 22:00:41.311816  304371 main.go:141] libmachine: (bridge-266997)       <model type='virtio'/>
	I0214 22:00:41.311835  304371 main.go:141] libmachine: (bridge-266997)     </interface>
	I0214 22:00:41.311845  304371 main.go:141] libmachine: (bridge-266997)     <serial type='pty'>
	I0214 22:00:41.311854  304371 main.go:141] libmachine: (bridge-266997)       <target port='0'/>
	I0214 22:00:41.311863  304371 main.go:141] libmachine: (bridge-266997)     </serial>
	I0214 22:00:41.311871  304371 main.go:141] libmachine: (bridge-266997)     <console type='pty'>
	I0214 22:00:41.311882  304371 main.go:141] libmachine: (bridge-266997)       <target type='serial' port='0'/>
	I0214 22:00:41.311894  304371 main.go:141] libmachine: (bridge-266997)     </console>
	I0214 22:00:41.311904  304371 main.go:141] libmachine: (bridge-266997)     <rng model='virtio'>
	I0214 22:00:41.311913  304371 main.go:141] libmachine: (bridge-266997)       <backend model='random'>/dev/random</backend>
	I0214 22:00:41.311922  304371 main.go:141] libmachine: (bridge-266997)     </rng>
	I0214 22:00:41.311929  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311935  304371 main.go:141] libmachine: (bridge-266997)     
	I0214 22:00:41.311943  304371 main.go:141] libmachine: (bridge-266997)   </devices>
	I0214 22:00:41.311953  304371 main.go:141] libmachine: (bridge-266997) </domain>
	I0214 22:00:41.311963  304371 main.go:141] libmachine: (bridge-266997) 
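The XML dump above is the complete libvirt domain definition the kvm2 driver hands to libvirt before booting the VM: the boot2docker ISO as a SCSI cdrom, the raw disk image as a virtio device, one NIC on the per-profile network mk-bridge-266997 and one on the default NAT network, plus a serial console and a virtio RNG. As a rough illustration only (not minikube's actual driver code), defining and starting such a domain through the github.com/libvirt/libvirt-go bindings could look like the sketch below; the XML file name is an assumption.

// Illustrative sketch: define and start a KVM domain from an XML
// description via the libvirt API, mirroring the "define libvirt domain
// using xml" and "starting domain" steps above. Assumes the
// github.com/libvirt/libvirt-go bindings; minikube's kvm2 driver may
// differ in detail.
package main

import (
	"log"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	xmlDesc, err := os.ReadFile("bridge-266997.xml") // domain XML like the dump above
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(string(xmlDesc))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // equivalent to `virsh start`
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}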
	I0214 22:00:41.316746  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:64:b9:e2 in network default
	I0214 22:00:41.317498  304371 main.go:141] libmachine: (bridge-266997) starting domain...
	I0214 22:00:41.317522  304371 main.go:141] libmachine: (bridge-266997) ensuring networks are active...
	I0214 22:00:41.317534  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.318252  304371 main.go:141] libmachine: (bridge-266997) Ensuring network default is active
	I0214 22:00:41.318659  304371 main.go:141] libmachine: (bridge-266997) Ensuring network mk-bridge-266997 is active
	I0214 22:00:41.319251  304371 main.go:141] libmachine: (bridge-266997) getting domain XML...
	I0214 22:00:41.320056  304371 main.go:141] libmachine: (bridge-266997) creating domain...
	I0214 22:00:41.741479  304371 main.go:141] libmachine: (bridge-266997) waiting for IP...
	I0214 22:00:41.742488  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:41.743161  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:41.743281  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:41.743162  304393 retry.go:31] will retry after 281.296096ms: waiting for domain to come up
	I0214 22:00:42.026644  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.027336  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.027373  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.027305  304393 retry.go:31] will retry after 320.245979ms: waiting for domain to come up
	I0214 22:00:42.348610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.349147  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.349189  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.349091  304393 retry.go:31] will retry after 386.466755ms: waiting for domain to come up
	I0214 22:00:42.737580  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:42.738183  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:42.738213  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:42.738129  304393 retry.go:31] will retry after 559.616616ms: waiting for domain to come up
	I0214 22:00:43.299023  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:43.299572  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:43.299604  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:43.299538  304393 retry.go:31] will retry after 737.634158ms: waiting for domain to come up
	I0214 22:00:44.038490  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.039152  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.039187  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.039125  304393 retry.go:31] will retry after 770.231832ms: waiting for domain to come up
	I0214 22:00:44.811167  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:44.811701  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:44.811735  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:44.811676  304393 retry.go:31] will retry after 1.145451756s: waiting for domain to come up
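The repeated "unable to find current IP address ... will retry after ..." lines are the driver polling libvirt's DHCP leases for the new domain's address, with a delay that grows and carries jitter (281ms, 320ms, 386ms, 559ms, ... above). Below is a minimal sketch of that poll-with-growing-backoff pattern; the poll callback, growth factor and attempt limit are illustrative assumptions, not minikube's retry package.

// Minimal sketch of the grow-and-jitter retry seen in the
// "will retry after ..." lines above. The lookup function and the
// backoff factor are illustrative assumptions.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// Grow the delay and add jitter so retries do not synchronize.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", errors.New("domain never reported an IP address")
}

func main() {
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 5)
	fmt.Println(err)
}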
	I0214 22:00:42.652620  302662 main.go:141] libmachine: (flannel-266997) Calling .GetIP
	I0214 22:00:42.655747  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656123  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:00:42.656157  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:00:42.656409  302662 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0214 22:00:42.660943  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
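The two commands above first check whether /etc/hosts already maps host.minikube.internal to the gateway IP, and if not, rewrite the file by filtering out any stale entry and appending the current one through a temp file (/tmp/h.$$) that is then copied back with sudo. Here is a sketch of the same filter-and-append update done locally; the path and IP are illustrative.

// Sketch of the /etc/hosts update above: drop any stale
// host.minikube.internal line, append the current mapping, and replace
// the file via a temporary copy. The path and IP are illustrative; the
// real command runs remotely with sudo.
package main

import (
	"log"
	"os"
	"strings"
)

func setHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log copies with `sudo cp` instead
}

func main() {
	if err := setHostEntry("/tmp/hosts.example", "192.168.61.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}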
	I0214 22:00:42.675829  302662 kubeadm.go:875] updating cluster {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:00:42.675939  302662 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:00:42.676015  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:42.716871  302662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:00:42.716942  302662 ssh_runner.go:195] Run: which lz4
	I0214 22:00:42.721755  302662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:00:42.726679  302662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:00:42.726706  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:00:44.256067  302662 crio.go:462] duration metric: took 1.53433582s to copy over tarball
	I0214 22:00:44.256172  302662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
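Because no preloaded images were found on the fresh VM (the stat on /preloaded.tar.lz4 failed above), the roughly 400 MB cri-o preload tarball is copied over and unpacked into /var so the v1.32.1 control-plane images are present before kubeadm runs. A sketch of that extraction step run locally is below; in the log it executes on the node over SSH.

// Sketch of the preload extraction step above: unpack the lz4-compressed
// image tarball into /var so CRI-O finds the Kubernetes images without
// pulling them. Command and paths mirror the log line; running it
// locally is an illustration only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}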
	I0214 22:00:42.679860  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:42.699140  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:42.699212  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:42.744951  296043 cri.go:89] found id: ""
	I0214 22:00:42.744980  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.744992  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:42.745002  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:42.745061  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:42.795928  296043 cri.go:89] found id: ""
	I0214 22:00:42.795960  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.795973  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:42.795981  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:42.796051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:42.850295  296043 cri.go:89] found id: ""
	I0214 22:00:42.850330  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.850344  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:42.850354  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:42.850427  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:42.913832  296043 cri.go:89] found id: ""
	I0214 22:00:42.913862  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.913874  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:42.913884  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:42.913947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:42.983499  296043 cri.go:89] found id: ""
	I0214 22:00:42.983589  296043 logs.go:282] 0 containers: []
	W0214 22:00:42.983607  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:42.983615  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:42.983689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:43.037301  296043 cri.go:89] found id: ""
	I0214 22:00:43.037331  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.037343  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:43.037351  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:43.037419  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:43.084109  296043 cri.go:89] found id: ""
	I0214 22:00:43.084141  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.084153  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:43.084161  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:43.084233  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:43.139429  296043 cri.go:89] found id: ""
	I0214 22:00:43.139460  296043 logs.go:282] 0 containers: []
	W0214 22:00:43.139473  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:43.139486  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:43.139503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:43.203986  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:43.204033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:43.221265  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:43.221297  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:43.326457  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:43.326485  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:43.326510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:43.450012  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:43.450053  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.020884  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:46.036692  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:46.036773  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:46.078455  296043 cri.go:89] found id: ""
	I0214 22:00:46.078496  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.078510  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:46.078521  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:46.078599  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:46.126385  296043 cri.go:89] found id: ""
	I0214 22:00:46.126418  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.126430  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:46.126438  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:46.126505  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:46.174790  296043 cri.go:89] found id: ""
	I0214 22:00:46.174823  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.174836  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:46.174844  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:46.174911  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:46.236219  296043 cri.go:89] found id: ""
	I0214 22:00:46.236264  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.236276  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:46.236284  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:46.236349  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:46.279991  296043 cri.go:89] found id: ""
	I0214 22:00:46.280019  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.280031  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:46.280038  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:46.280112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:46.316834  296043 cri.go:89] found id: ""
	I0214 22:00:46.316866  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.316878  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:46.316887  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:46.316951  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:46.355156  296043 cri.go:89] found id: ""
	I0214 22:00:46.355183  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.355192  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:46.355198  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:46.355252  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:46.400157  296043 cri.go:89] found id: ""
	I0214 22:00:46.400184  296043 logs.go:282] 0 containers: []
	W0214 22:00:46.400193  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:46.400204  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:46.400220  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:46.451755  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:46.451791  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:46.527757  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:46.527804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:46.544748  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:46.544789  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:46.629059  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:46.629085  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:46.629101  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:45.959707  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:45.960207  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:45.960270  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:45.960194  304393 retry.go:31] will retry after 1.00130128s: waiting for domain to come up
	I0214 22:00:46.962593  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:46.963008  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:46.963041  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:46.962955  304393 retry.go:31] will retry after 1.285042496s: waiting for domain to come up
	I0214 22:00:48.250543  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:48.250935  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:48.250965  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:48.250905  304393 retry.go:31] will retry after 1.446388395s: waiting for domain to come up
	I0214 22:00:49.698809  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:49.699471  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:49.699494  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:49.699386  304393 retry.go:31] will retry after 1.758522672s: waiting for domain to come up
	I0214 22:00:46.623241  302662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.367029567s)
	I0214 22:00:46.623279  302662 crio.go:469] duration metric: took 2.367170567s to extract the tarball
	I0214 22:00:46.623290  302662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:00:46.677690  302662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:00:46.722617  302662 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:00:46.722657  302662 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:00:46.722670  302662 kubeadm.go:926] updating node { 192.168.61.227 8443 v1.32.1 crio true true} ...
	I0214 22:00:46.722822  302662 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0214 22:00:46.722916  302662 ssh_runner.go:195] Run: crio config
	I0214 22:00:46.772485  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:46.772512  302662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:00:46.772537  302662 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-266997 NodeName:flannel-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:00:46.772661  302662 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.227"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:00:46.772737  302662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:00:46.784220  302662 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:00:46.784289  302662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:00:46.795155  302662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0214 22:00:46.811382  302662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:00:46.827059  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
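The kubelet drop-in and the kubeadm YAML shown above are generated per profile, with the node IP (192.168.61.227), node name and Kubernetes version filled in, and then written to the node (314, 352 and 2294 bytes in the scp lines above). A minimal sketch of rendering such a node-specific snippet from a Go text/template follows; the template text and field names are illustrative, not minikube's actual templates.

// Minimal sketch of rendering a node-specific config from a Go template,
// in the spirit of the kubelet drop-in shown above. Template text and
// struct fields are illustrative assumptions.
package main

import (
	"log"
	"os"
	"text/template"
)

const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
`

type nodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	cfg := nodeConfig{KubernetesVersion: "v1.32.1", NodeName: "flannel-266997", NodeIP: "192.168.61.227"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}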
	I0214 22:00:46.843173  302662 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0214 22:00:46.846933  302662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:00:46.859321  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:00:46.987406  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:00:47.004349  302662 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997 for IP: 192.168.61.227
	I0214 22:00:47.004372  302662 certs.go:194] generating shared ca certs ...
	I0214 22:00:47.004394  302662 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.004581  302662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:00:47.004694  302662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:00:47.004720  302662 certs.go:256] generating profile certs ...
	I0214 22:00:47.004800  302662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key
	I0214 22:00:47.004820  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt with IP's: []
	I0214 22:00:47.107488  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt ...
	I0214 22:00:47.107515  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.crt: {Name:mkcafc2c347155a87934cc2b1a02a2ae438963f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107679  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key ...
	I0214 22:00:47.107689  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/client.key: {Name:mk4272dd225f468d379f0edd78b2d669ffde6d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.107784  302662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247
	I0214 22:00:47.107805  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0214 22:00:47.253098  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 ...
	I0214 22:00:47.253126  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247: {Name:mk1eb945c33215ba17bdc46ffcf8840c7f3dd723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253276  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 ...
	I0214 22:00:47.253288  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247: {Name:mkaaf59e6a445fe3bbdd6b7d0c2fa8bb8ab97969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.253362  302662 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt
	I0214 22:00:47.253431  302662 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key.0e4fd247 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key
	I0214 22:00:47.253483  302662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key
	I0214 22:00:47.253498  302662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt with IP's: []
	I0214 22:00:47.423779  302662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt ...
	I0214 22:00:47.423813  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt: {Name:mk6b216b0369b6fec0e56e8e85f07a87b56291e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:00:47.423984  302662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key ...
	I0214 22:00:47.423997  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key: {Name:mk7e5c6c7d7c32823cb9d28b264f6cfeaebe6642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
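The certs.go steps above reuse the cached minikubeCA and proxyClientCA key pairs and then mint three profile certificates for flannel-266997: a client cert for the minikube user, an apiserver serving cert whose SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.227, and an aggregator (proxy-client) cert. A compact sketch of issuing a CA-signed certificate with SAN IPs using Go's crypto/x509 is below; key sizes, subjects and lifetimes are illustrative assumptions.

// Sketch of issuing a CA-signed certificate with SAN IPs, similar in
// spirit to the apiserver profile cert generated above. Key size,
// subject and lifetime are illustrative assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for the cached minikubeCA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the SAN IPs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.227"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}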
	I0214 22:00:47.424190  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:00:47.424232  302662 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:00:47.424244  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:00:47.424269  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:00:47.424295  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:00:47.424323  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:00:47.424371  302662 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:00:47.425017  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:00:47.450688  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:00:47.475301  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:00:47.506864  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:00:47.535303  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:00:47.558848  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:00:47.582259  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:00:47.605880  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/flannel-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 22:00:47.629346  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:00:47.655313  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:00:47.684140  302662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:00:47.711649  302662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:00:47.728204  302662 ssh_runner.go:195] Run: openssl version
	I0214 22:00:47.734993  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:00:47.745552  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.749952  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.750009  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:00:47.755881  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:00:47.766140  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:00:47.776438  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781213  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.781254  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:00:47.788489  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:00:47.799309  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:00:47.809509  302662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.813957  302662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.814001  302662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:00:47.819446  302662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
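The sequence above installs each CA bundle under /usr/share/ca-certificates, asks openssl x509 -hash for its subject hash, and links /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in the log) to the PEM so OpenSSL-based clients can find the trust anchor by hash lookup. A small sketch that computes the hash with the openssl CLI and creates the link is below; the path is illustrative and the real flow runs remotely with sudo.

// Sketch of the hash-and-symlink step above: ask openssl for the
// certificate's subject hash and link <hash>.0 in /etc/ssl/certs to the
// installed PEM. Paths are illustrative; the real flow runs over SSH
// with sudo.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	fmt.Printf("linking %s -> %s\n", link, pemPath)
	if err := os.Symlink(pemPath, link); err != nil {
		log.Printf("symlink failed (needs root): %v", err)
	}
}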
	I0214 22:00:47.829331  302662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:00:47.833329  302662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:00:47.833389  302662 kubeadm.go:392] StartCluster: {Name:flannel-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-266997 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:00:47.833488  302662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:00:47.833542  302662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:00:47.872065  302662 cri.go:89] found id: ""
	I0214 22:00:47.872175  302662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:00:47.886707  302662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:00:47.897518  302662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:00:47.906407  302662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:00:47.906422  302662 kubeadm.go:157] found existing configuration files:
	
	I0214 22:00:47.906468  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:00:47.917119  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:00:47.917169  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:00:47.927075  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:00:47.936360  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:00:47.936401  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:00:47.946326  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.958232  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:00:47.958271  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:00:47.970063  302662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:00:47.983821  302662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:00:47.983884  302662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:00:47.993655  302662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:00:48.149190  302662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:00:49.216868  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:49.235561  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:49.235639  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:49.291785  296043 cri.go:89] found id: ""
	I0214 22:00:49.291817  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.291830  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:49.291840  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:49.291901  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:49.340347  296043 cri.go:89] found id: ""
	I0214 22:00:49.340374  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.340385  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:49.340393  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:49.340446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:49.386999  296043 cri.go:89] found id: ""
	I0214 22:00:49.387030  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.387041  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:49.387048  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:49.387114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:49.433819  296043 cri.go:89] found id: ""
	I0214 22:00:49.433849  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.433861  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:49.433868  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:49.433930  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:49.477406  296043 cri.go:89] found id: ""
	I0214 22:00:49.477453  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.477467  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:49.477478  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:49.477560  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:49.522581  296043 cri.go:89] found id: ""
	I0214 22:00:49.522618  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.522648  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:49.522657  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:49.522721  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:49.560370  296043 cri.go:89] found id: ""
	I0214 22:00:49.560399  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.560410  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:49.560418  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:49.560479  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:49.600705  296043 cri.go:89] found id: ""
	I0214 22:00:49.600738  296043 logs.go:282] 0 containers: []
	W0214 22:00:49.600751  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:49.600765  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:49.600787  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:49.692921  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:49.693003  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:49.715093  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:49.715190  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:49.819499  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:49.819529  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:49.819546  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:49.955944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:49.955994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:51.459674  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:51.460265  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:51.460299  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:51.460228  304393 retry.go:31] will retry after 2.818661449s: waiting for domain to come up
	I0214 22:00:54.281066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:54.281541  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:54.281618  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:54.281543  304393 retry.go:31] will retry after 3.13231059s: waiting for domain to come up
	I0214 22:00:52.528580  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:52.545309  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:52.545394  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:52.587415  296043 cri.go:89] found id: ""
	I0214 22:00:52.587446  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.587458  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:52.587466  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:52.587534  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:52.647538  296043 cri.go:89] found id: ""
	I0214 22:00:52.647649  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.647668  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:52.647677  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:52.647749  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:52.700570  296043 cri.go:89] found id: ""
	I0214 22:00:52.700603  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.700615  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:52.700624  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:52.700687  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:52.740732  296043 cri.go:89] found id: ""
	I0214 22:00:52.740764  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.740775  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:52.740782  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:52.740846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:52.781456  296043 cri.go:89] found id: ""
	I0214 22:00:52.781491  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.781503  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:52.781512  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:52.781581  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:52.829342  296043 cri.go:89] found id: ""
	I0214 22:00:52.829380  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.829392  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:52.829400  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:52.829471  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:52.879000  296043 cri.go:89] found id: ""
	I0214 22:00:52.879033  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.879045  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:52.879053  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:52.879127  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:52.923620  296043 cri.go:89] found id: ""
	I0214 22:00:52.923667  296043 logs.go:282] 0 containers: []
	W0214 22:00:52.923680  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:52.923698  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:52.923717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:53.052613  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:53.052665  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:53.105757  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:53.105848  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:53.188362  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:53.188408  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:53.210408  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:53.210462  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:53.308816  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
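	The empty crictl results above together with the repeated "connection to the server localhost:8443 was refused" message mean no control-plane containers exist on this node yet, so the describe-nodes gather step cannot succeed. A minimal manual reproduction of the same check (the crictl and journalctl commands are taken from the log; the curl probe is an added assumption, not part of the test):

	    # no kube-apiserver container has been created by the kubelet yet
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # so the endpoint kubectl talks to is closed; expect "connection refused"
	    curl -sk https://localhost:8443/healthz
	    # kubelet logs usually show why the static pods were never started
	    sudo journalctl -u kubelet -n 400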
	I0214 22:00:55.810467  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:55.825649  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:55.825701  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:55.861736  296043 cri.go:89] found id: ""
	I0214 22:00:55.861759  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.861769  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:55.861776  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:55.861826  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:55.903282  296043 cri.go:89] found id: ""
	I0214 22:00:55.903318  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.903330  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:55.903352  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:55.903423  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:55.948890  296043 cri.go:89] found id: ""
	I0214 22:00:55.948919  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.948930  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:55.948937  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:55.948992  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:55.994279  296043 cri.go:89] found id: ""
	I0214 22:00:55.994307  296043 logs.go:282] 0 containers: []
	W0214 22:00:55.994316  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:55.994321  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:55.994376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:56.039497  296043 cri.go:89] found id: ""
	I0214 22:00:56.039539  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.039551  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:56.039563  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:56.039630  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:56.079255  296043 cri.go:89] found id: ""
	I0214 22:00:56.079284  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.079294  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:56.079303  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:56.079367  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:56.121581  296043 cri.go:89] found id: ""
	I0214 22:00:56.121610  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.121622  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:56.121630  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:56.121689  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:56.175042  296043 cri.go:89] found id: ""
	I0214 22:00:56.175066  296043 logs.go:282] 0 containers: []
	W0214 22:00:56.175076  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:56.175089  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:56.175103  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:56.229769  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:56.229804  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:00:56.243975  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:56.244001  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:56.319958  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:56.319982  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:56.319996  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:56.406004  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:56.406031  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:58.451548  302662 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:00:58.451629  302662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:00:58.451729  302662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:00:58.451841  302662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:00:58.451943  302662 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:00:58.452016  302662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:00:58.453381  302662 out.go:235]   - Generating certificates and keys ...
	I0214 22:00:58.453484  302662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:00:58.453567  302662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:00:58.453655  302662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:00:58.453731  302662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:00:58.453819  302662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:00:58.453888  302662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:00:58.453955  302662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:00:58.454117  302662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454193  302662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:00:58.454361  302662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-266997 localhost] and IPs [192.168.61.227 127.0.0.1 ::1]
	I0214 22:00:58.454457  302662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:00:58.454548  302662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:00:58.454610  302662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:00:58.454703  302662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:00:58.454782  302662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:00:58.454863  302662 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:00:58.454943  302662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:00:58.455064  302662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:00:58.455162  302662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:00:58.455295  302662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:00:58.455393  302662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:00:58.457252  302662 out.go:235]   - Booting up control plane ...
	I0214 22:00:58.457378  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:00:58.457451  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:00:58.457518  302662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:00:58.457610  302662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:00:58.457721  302662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:00:58.457788  302662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:00:58.457914  302662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:00:58.458088  302662 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:00:58.458149  302662 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.319865ms
	I0214 22:00:58.458214  302662 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:00:58.458290  302662 kubeadm.go:310] [api-check] The API server is healthy after 5.001402391s
	I0214 22:00:58.458460  302662 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:00:58.458610  302662 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:00:58.458708  302662 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:00:58.458905  302662 kubeadm.go:310] [mark-control-plane] Marking the node flannel-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:00:58.458986  302662 kubeadm.go:310] [bootstrap-token] Using token: i1fz0a.mthozpfw6j726kwk
	I0214 22:00:58.460106  302662 out.go:235]   - Configuring RBAC rules ...
	I0214 22:00:58.460212  302662 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:00:58.460327  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:00:58.460501  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:00:58.460640  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:00:58.460789  302662 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:00:58.460862  302662 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:00:58.460961  302662 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:00:58.460999  302662 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:00:58.461050  302662 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:00:58.461063  302662 kubeadm.go:310] 
	I0214 22:00:58.461122  302662 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:00:58.461128  302662 kubeadm.go:310] 
	I0214 22:00:58.461201  302662 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:00:58.461207  302662 kubeadm.go:310] 
	I0214 22:00:58.461228  302662 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:00:58.461309  302662 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:00:58.461378  302662 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:00:58.461386  302662 kubeadm.go:310] 
	I0214 22:00:58.461462  302662 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:00:58.461473  302662 kubeadm.go:310] 
	I0214 22:00:58.461518  302662 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:00:58.461525  302662 kubeadm.go:310] 
	I0214 22:00:58.461568  302662 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:00:58.461647  302662 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:00:58.461725  302662 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:00:58.461733  302662 kubeadm.go:310] 
	I0214 22:00:58.461811  302662 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:00:58.461891  302662 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:00:58.461898  302662 kubeadm.go:310] 
	I0214 22:00:58.462022  302662 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462119  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:00:58.462141  302662 kubeadm.go:310] 	--control-plane 
	I0214 22:00:58.462144  302662 kubeadm.go:310] 
	I0214 22:00:58.462225  302662 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:00:58.462241  302662 kubeadm.go:310] 
	I0214 22:00:58.462339  302662 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i1fz0a.mthozpfw6j726kwk \
	I0214 22:00:58.462459  302662 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
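	The discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes. If the hash ever needs to be recomputed on the control plane, one common openssl pipeline (an illustration, not part of this test run) is:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'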
	I0214 22:00:58.462474  302662 cni.go:84] Creating CNI manager for "flannel"
	I0214 22:00:58.463742  302662 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0214 22:00:57.415007  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:00:57.415501  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find current IP address of domain bridge-266997 in network mk-bridge-266997
	I0214 22:00:57.415568  304371 main.go:141] libmachine: (bridge-266997) DBG | I0214 22:00:57.415492  304393 retry.go:31] will retry after 5.136891997s: waiting for domain to come up
	I0214 22:00:58.464845  302662 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0214 22:00:58.471373  302662 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0214 22:00:58.471395  302662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4348 bytes)
	I0214 22:00:58.493635  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
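	Flannel is installed by scp-ing the manifest to /var/tmp/minikube/cni.yaml and applying it with the cluster's own kubectl, exactly as the two Run lines above show. A sketch of the same step done by hand, plus an assumed follow-up check that the flannel pods appear (the DaemonSet namespace and labels are not confirmed by this log):

	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply \
	        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	    # assumed verification
	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get pods -A | grep -i flannel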
	I0214 22:00:59.054047  302662 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:00:59.054126  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.054208  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-266997 minikube.k8s.io/updated_at=2025_02_14T22_00_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=flannel-266997 minikube.k8s.io/primary=true
	I0214 22:00:59.094360  302662 ops.go:34] apiserver oom_adj: -16
	I0214 22:00:59.226069  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:59.727014  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.226853  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:00.726232  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.226169  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:00:58.959819  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:00:58.975738  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:00:58.975799  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:00:59.016692  296043 cri.go:89] found id: ""
	I0214 22:00:59.016722  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.016734  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:00:59.016742  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:00:59.016794  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:00:59.056462  296043 cri.go:89] found id: ""
	I0214 22:00:59.056486  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.056495  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:00:59.056504  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:00:59.056554  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:00:59.102865  296043 cri.go:89] found id: ""
	I0214 22:00:59.102893  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.102904  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:00:59.102911  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:00:59.102977  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:00:59.139163  296043 cri.go:89] found id: ""
	I0214 22:00:59.139189  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.139199  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:00:59.139204  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:00:59.139256  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:00:59.184113  296043 cri.go:89] found id: ""
	I0214 22:00:59.184142  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.184153  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:00:59.184160  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:00:59.184226  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:00:59.231073  296043 cri.go:89] found id: ""
	I0214 22:00:59.231104  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.231113  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:00:59.231123  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:00:59.231304  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:00:59.284699  296043 cri.go:89] found id: ""
	I0214 22:00:59.284723  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.284733  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:00:59.284741  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:00:59.284793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:00:59.337079  296043 cri.go:89] found id: ""
	I0214 22:00:59.337100  296043 logs.go:282] 0 containers: []
	W0214 22:00:59.337107  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:00:59.337116  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:00:59.337133  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:00:59.410337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:00:59.410365  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:00:59.410380  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:00:59.492678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:00:59.492710  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:00:59.535993  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:00:59.536022  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:00:59.596863  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:00:59.596889  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:01.726818  302662 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:01.829407  302662 kubeadm.go:1105] duration metric: took 2.775341982s to wait for elevateKubeSystemPrivileges
	I0214 22:01:01.829439  302662 kubeadm.go:394] duration metric: took 13.996054167s to StartCluster
	I0214 22:01:01.829456  302662 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.829525  302662 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:01.831145  302662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:01.831377  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:01.831394  302662 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:01.831459  302662 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:01.831554  302662 addons.go:69] Setting storage-provisioner=true in profile "flannel-266997"
	I0214 22:01:01.831572  302662 addons.go:238] Setting addon storage-provisioner=true in "flannel-266997"
	I0214 22:01:01.831603  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.831596  302662 addons.go:69] Setting default-storageclass=true in profile "flannel-266997"
	I0214 22:01:01.831628  302662 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-266997"
	I0214 22:01:01.831660  302662 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:01.832023  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832059  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832025  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.832148  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.832802  302662 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:01.833905  302662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:01.852906  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0214 22:01:01.853018  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I0214 22:01:01.853380  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853592  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.853990  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854005  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854121  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.854144  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.854347  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854575  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.854851  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.854853  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.854886  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.858344  302662 addons.go:238] Setting addon default-storageclass=true in "flannel-266997"
	I0214 22:01:01.858420  302662 host.go:66] Checking if "flannel-266997" exists ...
	I0214 22:01:01.858836  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.858889  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.870725  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0214 22:01:01.871213  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.871699  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.871721  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.872069  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.872261  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.873845  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.875386  302662 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:01.876555  302662 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:01.876577  302662 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:01.876594  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.879497  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.879905  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.879931  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.880082  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.880247  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0214 22:01:01.880408  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.880539  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.880643  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:01.880960  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.881434  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.881453  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.881864  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.882412  302662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:01.882463  302662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:01.898239  302662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0214 22:01:01.898679  302662 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:01.899246  302662 main.go:141] libmachine: Using API Version  1
	I0214 22:01:01.899268  302662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:01.899656  302662 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:01.899837  302662 main.go:141] libmachine: (flannel-266997) Calling .GetState
	I0214 22:01:01.901209  302662 main.go:141] libmachine: (flannel-266997) Calling .DriverName
	I0214 22:01:01.901385  302662 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:01.901402  302662 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:01.901419  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHHostname
	I0214 22:01:01.903666  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.903938  302662 main.go:141] libmachine: (flannel-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:24:91", ip: ""} in network mk-flannel-266997: {Iface:virbr3 ExpiryTime:2025-02-14 23:00:31 +0000 UTC Type:0 Mac:52:54:00:ee:24:91 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:flannel-266997 Clientid:01:52:54:00:ee:24:91}
	I0214 22:01:01.904002  302662 main.go:141] libmachine: (flannel-266997) DBG | domain flannel-266997 has defined IP address 192.168.61.227 and MAC address 52:54:00:ee:24:91 in network mk-flannel-266997
	I0214 22:01:01.904165  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHPort
	I0214 22:01:01.904327  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHKeyPath
	I0214 22:01:01.904465  302662 main.go:141] libmachine: (flannel-266997) Calling .GetSSHUsername
	I0214 22:01:01.904593  302662 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/flannel-266997/id_rsa Username:docker}
	I0214 22:01:02.010213  302662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 22:01:02.068737  302662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:02.254658  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:02.280477  302662 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:02.558819  302662 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
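	The sed pipeline a few lines above inserts a hosts{} block and a log directive into the coredns ConfigMap before replacing it, which is what produces the "host record injected" message. A hedged way to confirm the injected fragment (the grep pattern assumes the block looks exactly like the sed expression wrote it):

	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	    # expected fragment, per the sed expression:
	    #        hosts {
	    #           192.168.61.1 host.minikube.internal
	    #           fallthrough
	    #        }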
	I0214 22:01:02.560262  302662 node_ready.go:35] waiting up to 15m0s for node "flannel-266997" to be "Ready" ...
	I0214 22:01:03.001707  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001737  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.001737  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.001748  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002000  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002015  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002024  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002031  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002103  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002117  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.002126  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.002133  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.002253  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.002271  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.004236  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.004250  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.004267  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.012492  302662 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:03.012514  302662 main.go:141] libmachine: (flannel-266997) Calling .Close
	I0214 22:01:03.012788  302662 main.go:141] libmachine: (flannel-266997) DBG | Closing plugin on server side
	I0214 22:01:03.012805  302662 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:03.012820  302662 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:03.014783  302662 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
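	Both addons were applied from /etc/kubernetes/addons/*.yaml with the in-cluster kubectl. A minimal sketch of verifying them, assuming minikube's usual object names (pod "storage-provisioner" in kube-system and StorageClass "standard"), which this log does not itself confirm:

	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        -n kube-system get pod storage-provisioner
	    sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get storageclass standard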
	I0214 22:01:02.553773  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554344  304371 main.go:141] libmachine: (bridge-266997) found domain IP: 192.168.50.81
	I0214 22:01:02.554373  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has current primary IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.554391  304371 main.go:141] libmachine: (bridge-266997) reserving static IP address...
	I0214 22:01:02.554641  304371 main.go:141] libmachine: (bridge-266997) DBG | unable to find host DHCP lease matching {name: "bridge-266997", mac: "52:54:00:b2:15:b0", ip: "192.168.50.81"} in network mk-bridge-266997
	I0214 22:01:02.642992  304371 main.go:141] libmachine: (bridge-266997) DBG | Getting to WaitForSSH function...
	I0214 22:01:02.643034  304371 main.go:141] libmachine: (bridge-266997) reserved static IP address 192.168.50.81 for domain bridge-266997
	I0214 22:01:02.643044  304371 main.go:141] libmachine: (bridge-266997) waiting for SSH...
	I0214 22:01:02.646143  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646598  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.646647  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.646923  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH client type: external
	I0214 22:01:02.646961  304371 main.go:141] libmachine: (bridge-266997) DBG | Using SSH private key: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa (-rw-------)
	I0214 22:01:02.647011  304371 main.go:141] libmachine: (bridge-266997) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0214 22:01:02.647024  304371 main.go:141] libmachine: (bridge-266997) DBG | About to run SSH command:
	I0214 22:01:02.647035  304371 main.go:141] libmachine: (bridge-266997) DBG | exit 0
	I0214 22:01:02.788308  304371 main.go:141] libmachine: (bridge-266997) DBG | SSH cmd err, output: <nil>: 
	I0214 22:01:02.788649  304371 main.go:141] libmachine: (bridge-266997) KVM machine creation complete
	I0214 22:01:02.789044  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:02.789606  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789750  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:02.789927  304371 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0214 22:01:02.789946  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:02.791392  304371 main.go:141] libmachine: Detecting operating system of created instance...
	I0214 22:01:02.791405  304371 main.go:141] libmachine: Waiting for SSH to be available...
	I0214 22:01:02.791410  304371 main.go:141] libmachine: Getting to WaitForSSH function...
	I0214 22:01:02.791416  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.793977  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794285  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.794302  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.794418  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.794553  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794709  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.794828  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.794971  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.795189  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.795201  304371 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0214 22:01:02.909895  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 22:01:02.909920  304371 main.go:141] libmachine: Detecting the provisioner...
	I0214 22:01:02.909929  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:02.912696  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913040  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:02.913066  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:02.913200  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:02.913439  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913647  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:02.913796  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:02.913932  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:02.914103  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:02.914113  304371 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0214 22:01:03.028655  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0214 22:01:03.028744  304371 main.go:141] libmachine: found compatible host: buildroot
	I0214 22:01:03.028760  304371 main.go:141] libmachine: Provisioning with buildroot...
	I0214 22:01:03.028776  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029006  304371 buildroot.go:166] provisioning hostname "bridge-266997"
	I0214 22:01:03.029030  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.029238  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.032183  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032556  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.032589  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.032715  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.032907  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033059  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.033225  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.033391  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.033602  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.033619  304371 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-266997 && echo "bridge-266997" | sudo tee /etc/hostname
	I0214 22:01:03.166933  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-266997
	
	I0214 22:01:03.166960  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.169777  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170149  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.170173  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.170404  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.170597  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.170926  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.171070  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.171304  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.171325  304371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-266997' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-266997/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-266997' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 22:01:03.303955  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
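	The script above is the standard hostname fixup: if /etc/hosts has no entry for bridge-266997, the 127.0.1.1 line is rewritten (or appended) so the new hostname resolves locally. An assumed spot check after it runs:

	    grep -n 'bridge-266997' /etc/hosts
	    hostname   # should print bridge-266997 after the earlier "sudo hostname" command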
	I0214 22:01:03.303990  304371 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20315-243456/.minikube CaCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20315-243456/.minikube}
	I0214 22:01:03.304021  304371 buildroot.go:174] setting up certificates
	I0214 22:01:03.304040  304371 provision.go:84] configureAuth start
	I0214 22:01:03.304054  304371 main.go:141] libmachine: (bridge-266997) Calling .GetMachineName
	I0214 22:01:03.304376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:03.307438  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.307857  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.307885  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.308035  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.310496  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.310856  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.310903  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.311001  304371 provision.go:143] copyHostCerts
	I0214 22:01:03.311081  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem, removing ...
	I0214 22:01:03.311103  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem
	I0214 22:01:03.311172  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/ca.pem (1082 bytes)
	I0214 22:01:03.311315  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem, removing ...
	I0214 22:01:03.311336  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem
	I0214 22:01:03.311374  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/cert.pem (1123 bytes)
	I0214 22:01:03.311492  304371 exec_runner.go:144] found /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem, removing ...
	I0214 22:01:03.311506  304371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem
	I0214 22:01:03.311538  304371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20315-243456/.minikube/key.pem (1675 bytes)
	I0214 22:01:03.311643  304371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem org=jenkins.bridge-266997 san=[127.0.0.1 192.168.50.81 bridge-266997 localhost minikube]
	I0214 22:01:03.424494  304371 provision.go:177] copyRemoteCerts
	I0214 22:01:03.424546  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 22:01:03.424572  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.426781  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427138  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.427178  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.427331  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.427484  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.427596  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.427715  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.517135  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 22:01:03.547506  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0214 22:01:03.579546  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0214 22:01:03.608150  304371 provision.go:87] duration metric: took 304.098585ms to configureAuth
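	configureAuth regenerates the machine server certificate with the SANs listed above and scp-s it, its key, and the CA into /etc/docker on the guest. Assuming openssl is available in the guest image (not shown in this log), the SANs of the installed cert can be inspected with:

	    sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	      | grep -A 1 'Subject Alternative Name'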
	I0214 22:01:03.608174  304371 buildroot.go:189] setting minikube options for container-runtime
	I0214 22:01:03.608327  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:03.608399  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.610851  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611181  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.611213  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.611355  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.611503  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611641  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.611754  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.611923  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:03.612153  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:03.612174  304371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0214 22:01:03.877480  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0214 22:01:03.877509  304371 main.go:141] libmachine: Checking connection to Docker...
	I0214 22:01:03.877519  304371 main.go:141] libmachine: (bridge-266997) Calling .GetURL
	I0214 22:01:03.878693  304371 main.go:141] libmachine: (bridge-266997) DBG | using libvirt version 6000000
	I0214 22:01:03.881358  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.881777  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.881808  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.882015  304371 main.go:141] libmachine: Docker is up and running!
	I0214 22:01:03.882031  304371 main.go:141] libmachine: Reticulating splines...
	I0214 22:01:03.882040  304371 client.go:171] duration metric: took 23.121294706s to LocalClient.Create
	I0214 22:01:03.882063  304371 start.go:167] duration metric: took 23.121376335s to libmachine.API.Create "bridge-266997"
	I0214 22:01:03.882075  304371 start.go:293] postStartSetup for "bridge-266997" (driver="kvm2")
	I0214 22:01:03.882086  304371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 22:01:03.882116  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:03.882342  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 22:01:03.882376  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:03.884877  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885218  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:03.885239  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:03.885378  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:03.885589  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:03.885735  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:03.885845  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:03.976177  304371 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 22:01:03.980618  304371 info.go:137] Remote host: Buildroot 2023.02.9
	I0214 22:01:03.980646  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/addons for local assets ...
	I0214 22:01:03.980710  304371 filesync.go:126] Scanning /home/jenkins/minikube-integration/20315-243456/.minikube/files for local assets ...
	I0214 22:01:03.980821  304371 filesync.go:149] local asset: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem -> 2507832.pem in /etc/ssl/certs
	I0214 22:01:03.980943  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 22:01:03.991483  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:04.025466  304371 start.go:296] duration metric: took 143.372996ms for postStartSetup
	I0214 22:01:04.025536  304371 main.go:141] libmachine: (bridge-266997) Calling .GetConfigRaw
	I0214 22:01:04.026327  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.029635  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030033  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.030057  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.030352  304371 profile.go:143] Saving config to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/config.json ...
	I0214 22:01:04.030586  304371 start.go:128] duration metric: took 23.29097433s to createHost
	I0214 22:01:04.030640  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.033610  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.033973  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.033998  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.034160  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.034303  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034507  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.034685  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.034832  304371 main.go:141] libmachine: Using SSH client type: native
	I0214 22:01:04.035026  304371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.81 22 <nil> <nil>}
	I0214 22:01:04.035041  304371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0214 22:01:04.164811  304371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739570464.136926718
	
	I0214 22:01:04.164832  304371 fix.go:216] guest clock: 1739570464.136926718
	I0214 22:01:04.164842  304371 fix.go:229] Guest: 2025-02-14 22:01:04.136926718 +0000 UTC Remote: 2025-02-14 22:01:04.030601008 +0000 UTC m=+24.065400357 (delta=106.32571ms)
	I0214 22:01:04.164866  304371 fix.go:200] guest clock delta is within tolerance: 106.32571ms
	I0214 22:01:04.164873  304371 start.go:83] releasing machines lock for "bridge-266997", held for 23.425433669s
	I0214 22:01:04.164896  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.165166  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:04.170113  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170541  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.170570  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.170778  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171367  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171550  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:04.171638  304371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 22:01:04.171684  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.171762  304371 ssh_runner.go:195] Run: cat /version.json
	I0214 22:01:04.171789  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:04.174819  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175456  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.175481  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.175607  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.175712  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.175787  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.175855  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.180293  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180297  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:04.180332  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:04.180351  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:04.180558  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:04.180770  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:04.180935  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:04.285108  304371 ssh_runner.go:195] Run: systemctl --version
	I0214 22:01:04.293451  304371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0214 22:01:04.463259  304371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0214 22:01:04.469147  304371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0214 22:01:04.469201  304371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 22:01:04.484729  304371 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 22:01:04.484747  304371 start.go:495] detecting cgroup driver to use...
	I0214 22:01:04.484800  304371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0214 22:01:04.502450  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0214 22:01:04.515492  304371 docker.go:217] disabling cri-docker service (if available) ...
	I0214 22:01:04.515540  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0214 22:01:04.528128  304371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0214 22:01:04.540475  304371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0214 22:01:04.666826  304371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0214 22:01:04.822228  304371 docker.go:233] disabling docker service ...
	I0214 22:01:04.822296  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0214 22:01:04.835915  304371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0214 22:01:04.848421  304371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0214 22:01:04.978701  304371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0214 22:01:05.096321  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0214 22:01:05.109638  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 22:01:05.127245  304371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0214 22:01:05.127289  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.137128  304371 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0214 22:01:05.137171  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.149215  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.161652  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.173632  304371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 22:01:05.184990  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.195432  304371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.211772  304371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0214 22:01:05.222080  304371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 22:01:05.231350  304371 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0214 22:01:05.231393  304371 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0214 22:01:05.244531  304371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 22:01:05.253659  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:05.368821  304371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0214 22:01:05.484555  304371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0214 22:01:05.484625  304371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0214 22:01:05.490439  304371 start.go:563] Will wait 60s for crictl version
	I0214 22:01:05.490512  304371 ssh_runner.go:195] Run: which crictl
	I0214 22:01:05.495575  304371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 22:01:05.546437  304371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0214 22:01:05.546517  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.585123  304371 ssh_runner.go:195] Run: crio --version
	I0214 22:01:05.622891  304371 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0214 22:01:03.016157  302662 addons.go:514] duration metric: took 1.184704963s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:03.064160  302662 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-266997" context rescaled to 1 replicas
	W0214 22:01:04.565870  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:02.111615  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:02.130034  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:02.130098  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:02.167633  296043 cri.go:89] found id: ""
	I0214 22:01:02.167669  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.167679  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:02.167687  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:02.167754  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:02.206752  296043 cri.go:89] found id: ""
	I0214 22:01:02.206778  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.206787  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:02.206793  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:02.206848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:02.242991  296043 cri.go:89] found id: ""
	I0214 22:01:02.243021  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.243033  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:02.243045  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:02.243112  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:02.284141  296043 cri.go:89] found id: ""
	I0214 22:01:02.284164  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.284172  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:02.284178  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:02.284217  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:02.329547  296043 cri.go:89] found id: ""
	I0214 22:01:02.329570  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.329577  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:02.329583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:02.329627  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:02.370731  296043 cri.go:89] found id: ""
	I0214 22:01:02.370758  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.370769  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:02.370778  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:02.370834  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:02.419069  296043 cri.go:89] found id: ""
	I0214 22:01:02.419102  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.419114  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:02.419122  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:02.419199  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:02.464600  296043 cri.go:89] found id: ""
	I0214 22:01:02.464636  296043 logs.go:282] 0 containers: []
	W0214 22:01:02.464655  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:02.464670  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:02.464690  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:02.480854  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:02.480890  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:02.572148  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:02.572175  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:02.572191  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:02.686587  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:02.686646  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:02.734413  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:02.734443  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.297012  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:05.310239  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:05.310303  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:05.344855  296043 cri.go:89] found id: ""
	I0214 22:01:05.344884  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.344895  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:05.344905  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:05.344962  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:05.390466  296043 cri.go:89] found id: ""
	I0214 22:01:05.390498  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.390510  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:05.390518  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:05.390575  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:05.442562  296043 cri.go:89] found id: ""
	I0214 22:01:05.442598  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.442611  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:05.442619  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:05.442707  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:05.482534  296043 cri.go:89] found id: ""
	I0214 22:01:05.482562  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.482577  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:05.482583  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:05.482659  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:05.526775  296043 cri.go:89] found id: ""
	I0214 22:01:05.526802  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.526813  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:05.526821  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:05.526887  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:05.566945  296043 cri.go:89] found id: ""
	I0214 22:01:05.566971  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.566979  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:05.566991  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:05.567050  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:05.610803  296043 cri.go:89] found id: ""
	I0214 22:01:05.610836  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.610849  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:05.610857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:05.610934  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:05.658446  296043 cri.go:89] found id: ""
	I0214 22:01:05.658475  296043 logs.go:282] 0 containers: []
	W0214 22:01:05.658485  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:05.658497  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:05.658512  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:05.731902  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:05.731929  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:05.731942  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:05.842065  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:05.842098  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:05.903308  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:05.903343  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:05.975417  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:05.975516  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:05.623928  304371 main.go:141] libmachine: (bridge-266997) Calling .GetIP
	I0214 22:01:05.627346  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.627929  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:05.627961  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:05.628196  304371 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0214 22:01:05.633410  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:01:05.650954  304371 kubeadm.go:875] updating cluster {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0214 22:01:05.651104  304371 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 22:01:05.651162  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:05.701425  304371 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0214 22:01:05.701507  304371 ssh_runner.go:195] Run: which lz4
	I0214 22:01:05.712837  304371 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 22:01:05.718837  304371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 22:01:05.718870  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0214 22:01:07.256269  304371 crio.go:462] duration metric: took 1.543466683s to copy over tarball
	I0214 22:01:07.256357  304371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 22:01:09.695876  304371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.439479772s)
	I0214 22:01:09.695918  304371 crio.go:469] duration metric: took 2.439614211s to extract the tarball
	I0214 22:01:09.695928  304371 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 22:01:09.733290  304371 ssh_runner.go:195] Run: sudo crictl images --output json
	I0214 22:01:09.780117  304371 crio.go:514] all images are preloaded for cri-o runtime.
	I0214 22:01:09.780140  304371 cache_images.go:84] Images are preloaded, skipping loading
	I0214 22:01:09.780160  304371 kubeadm.go:926] updating node { 192.168.50.81 8443 v1.32.1 crio true true} ...
	I0214 22:01:09.780281  304371 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-266997 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0214 22:01:09.780367  304371 ssh_runner.go:195] Run: crio config
	I0214 22:01:09.827891  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:09.827918  304371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0214 22:01:09.827940  304371 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.81 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-266997 NodeName:bridge-266997 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 22:01:09.828092  304371 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-266997"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 22:01:09.828156  304371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0214 22:01:09.837899  304371 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 22:01:09.837957  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 22:01:09.847189  304371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0214 22:01:09.863880  304371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 22:01:09.881813  304371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0214 22:01:09.898828  304371 ssh_runner.go:195] Run: grep 192.168.50.81	control-plane.minikube.internal$ /etc/hosts
	I0214 22:01:09.902526  304371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 22:01:09.914292  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:10.040048  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:10.057372  304371 certs.go:68] Setting up /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997 for IP: 192.168.50.81
	I0214 22:01:10.057391  304371 certs.go:194] generating shared ca certs ...
	I0214 22:01:10.057407  304371 certs.go:226] acquiring lock for ca certs: {Name:mk43b22b1c0ea62ac748492a836a372fe73583cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.057580  304371 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key
	I0214 22:01:10.057639  304371 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key
	I0214 22:01:10.057653  304371 certs.go:256] generating profile certs ...
	I0214 22:01:10.057737  304371 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key
	I0214 22:01:10.057770  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt with IP's: []
	I0214 22:01:10.492985  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt ...
	I0214 22:01:10.493014  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.crt: {Name:mk0e9a544ab62bf3bac0aeef07e33db8d1284119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493211  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key ...
	I0214 22:01:10.493229  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/client.key: {Name:mk822ad23de6909e3dcaa3a4b87a06fbdfba8176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.493342  304371 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201
	I0214 22:01:10.493362  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.81]
	I0214 22:01:10.673628  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 ...
	I0214 22:01:10.673651  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201: {Name:mka33ef1d0779dee85a1340cd519c438b531f8ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673787  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 ...
	I0214 22:01:10.673801  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201: {Name:mk2bcfa59be0eef44107f0d874f0a177271d56dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.673881  304371 certs.go:381] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt
	I0214 22:01:10.673969  304371 certs.go:385] copying /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key.981ed201 -> /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key
	I0214 22:01:10.674034  304371 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key
	I0214 22:01:10.674051  304371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt with IP's: []
	I0214 22:01:10.815875  304371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt ...
	I0214 22:01:10.815900  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt: {Name:mk07fc7632bf05ef6abf8667a18602d64842bf54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816040  304371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key ...
	I0214 22:01:10.816054  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key: {Name:mk49f50231c8caf0067f42cee0eef760808a4f92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:10.816226  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem (1338 bytes)
	W0214 22:01:10.816268  304371 certs.go:480] ignoring /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783_empty.pem, impossibly tiny 0 bytes
	I0214 22:01:10.816279  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 22:01:10.816311  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/ca.pem (1082 bytes)
	I0214 22:01:10.816343  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/cert.pem (1123 bytes)
	I0214 22:01:10.816367  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/certs/key.pem (1675 bytes)
	I0214 22:01:10.816410  304371 certs.go:484] found cert: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem (1708 bytes)
	I0214 22:01:10.817057  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 22:01:10.849496  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0214 22:01:10.873071  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 22:01:10.898240  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 22:01:10.921216  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0214 22:01:10.944392  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 22:01:10.968476  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 22:01:10.994710  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/bridge-266997/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 22:01:11.019089  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/certs/250783.pem --> /usr/share/ca-certificates/250783.pem (1338 bytes)
	I0214 22:01:11.041841  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/ssl/certs/2507832.pem --> /usr/share/ca-certificates/2507832.pem (1708 bytes)
	I0214 22:01:11.064672  304371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 22:01:11.087698  304371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 22:01:11.105733  304371 ssh_runner.go:195] Run: openssl version
	I0214 22:01:11.113022  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250783.pem && ln -fs /usr/share/ca-certificates/250783.pem /etc/ssl/certs/250783.pem"
	I0214 22:01:11.124173  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128829  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 14 20:52 /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.128877  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250783.pem
	I0214 22:01:11.134956  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250783.pem /etc/ssl/certs/51391683.0"
	I0214 22:01:11.145646  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2507832.pem && ln -fs /usr/share/ca-certificates/2507832.pem /etc/ssl/certs/2507832.pem"
	I0214 22:01:11.156620  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.160984  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 14 20:52 /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.161023  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2507832.pem
	I0214 22:01:11.166639  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2507832.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 22:01:11.177621  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 22:01:11.189431  304371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193866  304371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 14 20:45 /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.193907  304371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 22:01:11.199670  304371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 22:01:11.210845  304371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0214 22:01:11.214693  304371 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0214 22:01:11.214742  304371 kubeadm.go:392] StartCluster: {Name:bridge-266997 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-266997 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 22:01:11.214826  304371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0214 22:01:11.214862  304371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0214 22:01:11.258711  304371 cri.go:89] found id: ""
	I0214 22:01:11.258765  304371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 22:01:11.269032  304371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:01:11.279047  304371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:01:11.288803  304371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:01:11.288822  304371 kubeadm.go:157] found existing configuration files:
	
	I0214 22:01:11.288862  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:01:11.298148  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:01:11.298188  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:01:11.307741  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:01:11.316856  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:01:11.316903  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:01:11.326555  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.335896  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:01:11.335935  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:01:11.345669  304371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:01:11.355306  304371 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:01:11.355357  304371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:01:11.364907  304371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:01:11.427252  304371 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0214 22:01:11.427326  304371 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:01:11.531552  304371 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:01:11.531691  304371 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:01:11.531851  304371 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0214 22:01:11.543555  304371 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0214 22:01:07.185994  302662 node_ready.go:57] node "flannel-266997" has "Ready":"False" status (will retry)
	I0214 22:01:08.565172  302662 node_ready.go:49] node "flannel-266997" is "Ready"
	I0214 22:01:08.565220  302662 node_ready.go:38] duration metric: took 6.004932024s for node "flannel-266997" to be "Ready" ...
	I0214 22:01:08.565240  302662 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:08.565299  302662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.602874  302662 api_server.go:72] duration metric: took 6.771445737s to wait for apiserver process to appear ...
	I0214 22:01:08.602902  302662 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:08.602925  302662 api_server.go:253] Checking apiserver healthz at https://192.168.61.227:8443/healthz ...
	I0214 22:01:08.611745  302662 api_server.go:279] https://192.168.61.227:8443/healthz returned 200:
	ok
	I0214 22:01:08.612774  302662 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:08.612800  302662 api_server.go:131] duration metric: took 9.890538ms to wait for apiserver health ...
	I0214 22:01:08.612810  302662 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:08.617075  302662 system_pods.go:59] 7 kube-system pods found
	I0214 22:01:08.617117  302662 system_pods.go:61] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.617131  302662 system_pods.go:61] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.617140  302662 system_pods.go:61] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.617151  302662 system_pods.go:61] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.617162  302662 system_pods.go:61] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.617176  302662 system_pods.go:61] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.617187  302662 system_pods.go:61] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.617199  302662 system_pods.go:74] duration metric: took 4.381701ms to wait for pod list to return data ...
	I0214 22:01:08.617213  302662 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:08.620515  302662 default_sa.go:45] found service account: "default"
	I0214 22:01:08.620531  302662 default_sa.go:55] duration metric: took 3.308722ms for default service account to be created ...
	I0214 22:01:08.620537  302662 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:08.628163  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.628196  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.628205  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.628217  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.628232  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.628242  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.628250  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.628261  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.628286  302662 retry.go:31] will retry after 229.157349ms: missing components: kube-dns
	I0214 22:01:08.862237  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:08.862283  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:08.862293  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:08.862304  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:08.862315  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:08.862322  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:08.862330  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:08.862346  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:08.862370  302662 retry.go:31] will retry after 313.437713ms: missing components: kube-dns
	I0214 22:01:09.180643  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.180698  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.180709  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.180720  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.180732  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.180741  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.180751  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.180762  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.180785  302662 retry.go:31] will retry after 300.968731ms: missing components: kube-dns
	I0214 22:01:09.485817  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.485866  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.485876  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.485888  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.485897  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.485903  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.485914  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.485919  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.485947  302662 retry.go:31] will retry after 439.51358ms: missing components: kube-dns
	I0214 22:01:09.929653  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:09.929691  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:09.929699  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:09.929711  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:09.929724  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:09.929734  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:09.929747  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:09.929753  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:09.929778  302662 retry.go:31] will retry after 485.567052ms: missing components: kube-dns
	I0214 22:01:10.418771  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:10.418804  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:10.418813  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:10.418823  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:10.418833  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:10.418840  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:10.418848  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:10.418856  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:10.418873  302662 retry.go:31] will retry after 756.594325ms: missing components: kube-dns
	I0214 22:01:11.179962  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:11.179995  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:11.180004  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:11.180012  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:11.180022  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 22:01:11.180032  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:11.180043  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:11.180052  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:11.180085  302662 retry.go:31] will retry after 1.009789241s: missing components: kube-dns
	I0214 22:01:08.494769  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:08.514374  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:08.514458  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:08.561822  296043 cri.go:89] found id: ""
	I0214 22:01:08.561850  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.561859  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:08.561865  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:08.561912  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:08.602005  296043 cri.go:89] found id: ""
	I0214 22:01:08.602038  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.602051  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:08.602059  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:08.602136  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:08.642584  296043 cri.go:89] found id: ""
	I0214 22:01:08.642612  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.642636  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:08.642647  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:08.642725  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:08.677455  296043 cri.go:89] found id: ""
	I0214 22:01:08.677490  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.677506  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:08.677514  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:08.677579  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:08.723982  296043 cri.go:89] found id: ""
	I0214 22:01:08.724032  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.724046  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:08.724056  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:08.724129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:08.775467  296043 cri.go:89] found id: ""
	I0214 22:01:08.775503  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.775516  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:08.775525  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:08.775587  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:08.820143  296043 cri.go:89] found id: ""
	I0214 22:01:08.820187  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.820209  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:08.820218  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:08.820289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:08.855406  296043 cri.go:89] found id: ""
	I0214 22:01:08.855437  296043 logs.go:282] 0 containers: []
	W0214 22:01:08.855448  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:08.855460  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:08.855476  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:08.914025  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:08.914052  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:08.927679  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:08.927708  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:09.029673  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:09.029699  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:09.029717  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:09.113311  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:09.113358  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:11.659812  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:11.673901  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:11.673974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:11.710824  296043 cri.go:89] found id: ""
	I0214 22:01:11.710856  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.710868  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:11.710877  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:11.710939  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:11.749955  296043 cri.go:89] found id: ""
	I0214 22:01:11.749996  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.750009  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:11.750034  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:11.750109  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:11.784268  296043 cri.go:89] found id: ""
	I0214 22:01:11.784296  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.784308  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:11.784317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:11.784381  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:11.565511  304371 out.go:235]   - Generating certificates and keys ...
	I0214 22:01:11.565641  304371 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:01:11.565736  304371 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:01:11.597156  304371 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 22:01:11.777564  304371 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0214 22:01:12.000290  304371 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0214 22:01:12.274579  304371 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0214 22:01:12.340720  304371 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0214 22:01:12.341077  304371 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.592390  304371 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0214 22:01:12.592731  304371 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-266997 localhost] and IPs [192.168.50.81 127.0.0.1 ::1]
	I0214 22:01:12.789172  304371 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 22:01:12.860794  304371 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 22:01:12.958408  304371 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0214 22:01:12.958673  304371 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:01:13.132122  304371 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:01:13.373236  304371 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0214 22:01:13.504795  304371 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:01:13.776085  304371 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:01:14.088313  304371 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:01:14.089020  304371 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:01:14.093447  304371 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:01:14.095224  304371 out.go:235]   - Booting up control plane ...
	I0214 22:01:14.095351  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:01:14.095464  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:01:14.095532  304371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:01:14.111383  304371 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:01:14.118029  304371 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:01:14.118117  304371 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:01:14.266373  304371 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0214 22:01:14.266491  304371 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0214 22:01:14.767156  304371 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.155046ms
	I0214 22:01:14.767269  304371 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0214 22:01:12.399215  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:12.399250  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:12.399257  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:12.399265  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:12.399271  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:12.399279  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:12.399285  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:12.399296  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:12.399322  302662 retry.go:31] will retry after 1.435229105s: missing components: kube-dns
	I0214 22:01:13.838510  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:13.838553  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:13.838563  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:13.838572  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:13.838579  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:13.838584  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:13.838590  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:13.838599  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:13.838619  302662 retry.go:31] will retry after 1.229976943s: missing components: kube-dns
	I0214 22:01:15.072944  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:15.072987  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:15.072997  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:15.073007  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:15.073017  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:15.073024  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:15.073034  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:15.073042  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:15.073077  302662 retry.go:31] will retry after 1.417685153s: missing components: kube-dns
	I0214 22:01:16.494415  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:16.494450  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:16.494456  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:16.494463  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:16.494467  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:16.494471  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:16.494475  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:16.494478  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:16.494495  302662 retry.go:31] will retry after 2.360792167s: missing components: kube-dns
	I0214 22:01:11.822362  296043 cri.go:89] found id: ""
	I0214 22:01:11.822387  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.822395  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:11.822401  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:11.822462  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:11.860753  296043 cri.go:89] found id: ""
	I0214 22:01:11.860778  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.860786  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:11.860791  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:11.860833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:11.901670  296043 cri.go:89] found id: ""
	I0214 22:01:11.901697  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.901709  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:11.901717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:11.901779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:11.939194  296043 cri.go:89] found id: ""
	I0214 22:01:11.939220  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.939230  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:11.939236  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:11.939289  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:11.973819  296043 cri.go:89] found id: ""
	I0214 22:01:11.973846  296043 logs.go:282] 0 containers: []
	W0214 22:01:11.973857  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:11.973869  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:11.973882  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:12.052290  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:12.052321  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:12.099732  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:12.099775  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:12.163962  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:12.163994  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:12.181579  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:12.181625  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:12.272639  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:14.774322  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:14.787244  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:14.787299  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:14.820977  296043 cri.go:89] found id: ""
	I0214 22:01:14.821011  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.821024  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:14.821034  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:14.821099  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:14.852858  296043 cri.go:89] found id: ""
	I0214 22:01:14.852879  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.852888  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:14.852893  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:14.852947  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:14.896441  296043 cri.go:89] found id: ""
	I0214 22:01:14.896464  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.896475  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:14.896483  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:14.896535  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:14.930673  296043 cri.go:89] found id: ""
	I0214 22:01:14.930700  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.930712  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:14.930719  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:14.930776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:14.972676  296043 cri.go:89] found id: ""
	I0214 22:01:14.972708  296043 logs.go:282] 0 containers: []
	W0214 22:01:14.972721  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:14.972729  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:14.972797  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:15.009271  296043 cri.go:89] found id: ""
	I0214 22:01:15.009303  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.009314  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:15.009323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:15.009406  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:15.045975  296043 cri.go:89] found id: ""
	I0214 22:01:15.046007  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.046021  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:15.046029  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:15.046102  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:15.084924  296043 cri.go:89] found id: ""
	I0214 22:01:15.084956  296043 logs.go:282] 0 containers: []
	W0214 22:01:15.084967  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:15.084980  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:15.084995  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:15.143553  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:15.143587  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:15.158649  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:15.158687  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:15.235319  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:15.235343  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:15.235363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:15.324951  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:15.324990  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:19.266915  304371 kubeadm.go:310] [api-check] The API server is healthy after 4.501226967s
	I0214 22:01:19.286682  304371 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 22:01:19.300140  304371 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 22:01:19.320686  304371 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 22:01:19.320946  304371 kubeadm.go:310] [mark-control-plane] Marking the node bridge-266997 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 22:01:19.338179  304371 kubeadm.go:310] [bootstrap-token] Using token: 4eaob3.8jnji5hz23dblskn
	I0214 22:01:19.339524  304371 out.go:235]   - Configuring RBAC rules ...
	I0214 22:01:19.339671  304371 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 22:01:19.345535  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 22:01:19.356239  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 22:01:19.363770  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 22:01:19.366981  304371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 22:01:19.371513  304371 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 22:01:19.672166  304371 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 22:01:20.099981  304371 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0214 22:01:20.669741  304371 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0214 22:01:20.671058  304371 kubeadm.go:310] 
	I0214 22:01:20.671186  304371 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0214 22:01:20.671210  304371 kubeadm.go:310] 
	I0214 22:01:20.671373  304371 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0214 22:01:20.671393  304371 kubeadm.go:310] 
	I0214 22:01:20.671428  304371 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0214 22:01:20.671511  304371 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 22:01:20.671588  304371 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 22:01:20.671598  304371 kubeadm.go:310] 
	I0214 22:01:20.671681  304371 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0214 22:01:20.671694  304371 kubeadm.go:310] 
	I0214 22:01:20.671769  304371 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 22:01:20.671784  304371 kubeadm.go:310] 
	I0214 22:01:20.671862  304371 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0214 22:01:20.671971  304371 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 22:01:20.672051  304371 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 22:01:20.672059  304371 kubeadm.go:310] 
	I0214 22:01:20.672173  304371 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 22:01:20.672270  304371 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0214 22:01:20.672278  304371 kubeadm.go:310] 
	I0214 22:01:20.672403  304371 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.672552  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b \
	I0214 22:01:20.672586  304371 kubeadm.go:310] 	--control-plane 
	I0214 22:01:20.672596  304371 kubeadm.go:310] 
	I0214 22:01:20.672722  304371 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0214 22:01:20.672757  304371 kubeadm.go:310] 
	I0214 22:01:20.672884  304371 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4eaob3.8jnji5hz23dblskn \
	I0214 22:01:20.673034  304371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b8fd646ea82b3f65888cc89110cf427382759d7118a60e245f3549e23ff98d6b 
	I0214 22:01:20.673551  304371 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:01:20.673583  304371 cni.go:84] Creating CNI manager for "bridge"
	I0214 22:01:20.674803  304371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 22:01:18.859941  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:18.859975  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:18.859981  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:18.859987  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:18.859991  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:18.859996  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:18.860000  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:18.860004  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:18.860019  302662 retry.go:31] will retry after 2.716114002s: missing components: kube-dns
	I0214 22:01:17.869522  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:17.886022  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:17.886114  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:17.926259  296043 cri.go:89] found id: ""
	I0214 22:01:17.926287  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.926296  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:17.926302  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:17.926358  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:17.989648  296043 cri.go:89] found id: ""
	I0214 22:01:17.989675  296043 logs.go:282] 0 containers: []
	W0214 22:01:17.989683  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:17.989689  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:17.989744  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:18.041262  296043 cri.go:89] found id: ""
	I0214 22:01:18.041295  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.041307  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:18.041315  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:18.041380  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:18.080028  296043 cri.go:89] found id: ""
	I0214 22:01:18.080059  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.080069  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:18.080075  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:18.080134  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:18.116135  296043 cri.go:89] found id: ""
	I0214 22:01:18.116163  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.116172  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:18.116179  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:18.116239  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:18.148268  296043 cri.go:89] found id: ""
	I0214 22:01:18.148302  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.148315  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:18.148323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:18.148399  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:18.180352  296043 cri.go:89] found id: ""
	I0214 22:01:18.180378  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.180388  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:18.180394  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:18.180438  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:18.211513  296043 cri.go:89] found id: ""
	I0214 22:01:18.211534  296043 logs.go:282] 0 containers: []
	W0214 22:01:18.211541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:18.211551  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:18.211562  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:18.260797  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:18.260831  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:18.273477  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:18.273503  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:18.340163  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:18.340182  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:18.340193  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:18.413927  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:18.413950  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:20.952238  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:20.964925  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:20.964984  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:21.000265  296043 cri.go:89] found id: ""
	I0214 22:01:21.000295  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.000306  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:21.000314  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:21.000376  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:21.042754  296043 cri.go:89] found id: ""
	I0214 22:01:21.042780  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.042790  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:21.042798  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:21.042862  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:21.078636  296043 cri.go:89] found id: ""
	I0214 22:01:21.078664  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.078676  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:21.078684  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:21.078747  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:21.112023  296043 cri.go:89] found id: ""
	I0214 22:01:21.112050  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.112058  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:21.112067  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:21.112129  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:21.147419  296043 cri.go:89] found id: ""
	I0214 22:01:21.147451  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.147462  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:21.147470  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:21.147541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:21.180151  296043 cri.go:89] found id: ""
	I0214 22:01:21.180191  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.180201  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:21.180209  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:21.180271  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:21.215007  296043 cri.go:89] found id: ""
	I0214 22:01:21.215037  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.215049  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:21.215057  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:21.215122  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:21.247912  296043 cri.go:89] found id: ""
	I0214 22:01:21.247953  296043 logs.go:282] 0 containers: []
	W0214 22:01:21.247964  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:21.247976  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:21.247992  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:21.300392  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:21.300429  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:21.313583  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:21.313604  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:21.381863  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:21.381888  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:21.381902  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:21.460562  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:21.460591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:21.580732  302662 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:21.580767  302662 system_pods.go:89] "coredns-668d6bf9bc-vlb9g" [38bddc8c-6af9-4458-966d-b92378047d91] Running
	I0214 22:01:21.580773  302662 system_pods.go:89] "etcd-flannel-266997" [98359e38-1273-4b3d-adbd-b7737bb03f36] Running
	I0214 22:01:21.580777  302662 system_pods.go:89] "kube-apiserver-flannel-266997" [bdbaed79-d702-469c-807e-7c2be4afcd35] Running
	I0214 22:01:21.580781  302662 system_pods.go:89] "kube-controller-manager-flannel-266997" [c407f38f-85b9-47b4-8af6-f247c8eb06f9] Running
	I0214 22:01:21.580785  302662 system_pods.go:89] "kube-proxy-lnlt5" [c19e0141-bd45-4e5b-b97e-e8c7aa330dc9] Running
	I0214 22:01:21.580789  302662 system_pods.go:89] "kube-scheduler-flannel-266997" [ccd3df38-db24-481b-90ca-e5015e37a686] Running
	I0214 22:01:21.580792  302662 system_pods.go:89] "storage-provisioner" [0f99c3ce-eaa7-4cb3-abef-31236c72e2c1] Running
	I0214 22:01:21.580800  302662 system_pods.go:126] duration metric: took 12.960258845s to wait for k8s-apps to be running ...
	I0214 22:01:21.580808  302662 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:21.580852  302662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:21.596764  302662 system_svc.go:56] duration metric: took 15.934258ms WaitForService to wait for kubelet
	I0214 22:01:21.596793  302662 kubeadm.go:578] duration metric: took 19.765370857s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:21.596814  302662 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:21.601648  302662 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:21.601680  302662 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:21.601700  302662 node_conditions.go:105] duration metric: took 4.879566ms to run NodePressure ...
	I0214 22:01:21.601715  302662 start.go:241] waiting for startup goroutines ...
	I0214 22:01:21.601731  302662 start.go:246] waiting for cluster config update ...
	I0214 22:01:21.601749  302662 start.go:255] writing updated cluster config ...
	I0214 22:01:21.602045  302662 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:21.607012  302662 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:21.610715  302662 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.619683  302662 pod_ready.go:94] pod "coredns-668d6bf9bc-vlb9g" is "Ready"
	I0214 22:01:21.619715  302662 pod_ready.go:86] duration metric: took 8.975726ms for pod "coredns-668d6bf9bc-vlb9g" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.621747  302662 pod_ready.go:83] waiting for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.625095  302662 pod_ready.go:94] pod "etcd-flannel-266997" is "Ready"
	I0214 22:01:21.625112  302662 pod_ready.go:86] duration metric: took 3.349739ms for pod "etcd-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.626839  302662 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.630189  302662 pod_ready.go:94] pod "kube-apiserver-flannel-266997" is "Ready"
	I0214 22:01:21.630205  302662 pod_ready.go:86] duration metric: took 3.350537ms for pod "kube-apiserver-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:21.631966  302662 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.010234  302662 pod_ready.go:94] pod "kube-controller-manager-flannel-266997" is "Ready"
	I0214 22:01:22.010258  302662 pod_ready.go:86] duration metric: took 378.271702ms for pod "kube-controller-manager-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.210925  302662 pod_ready.go:83] waiting for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.610516  302662 pod_ready.go:94] pod "kube-proxy-lnlt5" is "Ready"
	I0214 22:01:22.610544  302662 pod_ready.go:86] duration metric: took 399.590168ms for pod "kube-proxy-lnlt5" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:22.810190  302662 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210781  302662 pod_ready.go:94] pod "kube-scheduler-flannel-266997" is "Ready"
	I0214 22:01:23.210809  302662 pod_ready.go:86] duration metric: took 400.595935ms for pod "kube-scheduler-flannel-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:23.210825  302662 pod_ready.go:40] duration metric: took 1.603788898s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:23.254724  302662 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:23.256280  302662 out.go:177] * Done! kubectl is now configured to use "flannel-266997" cluster and "default" namespace by default
	I0214 22:01:20.675853  304371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 22:01:20.687674  304371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0214 22:01:20.710977  304371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 22:01:20.711051  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:20.711136  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-266997 minikube.k8s.io/updated_at=2025_02_14T22_01_20_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fbfc2b66c4da8e6c2b2b8466356b2fbd8038ee5a minikube.k8s.io/name=bridge-266997 minikube.k8s.io/primary=true
	I0214 22:01:20.857437  304371 ops.go:34] apiserver oom_adj: -16
	I0214 22:01:20.857573  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.357978  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:21.858196  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.357909  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:22.858323  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.358263  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:23.858483  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.358410  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:24.857672  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.358214  304371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 22:01:25.477742  304371 kubeadm.go:1105] duration metric: took 4.766743198s to wait for elevateKubeSystemPrivileges
	I0214 22:01:25.477787  304371 kubeadm.go:394] duration metric: took 14.263049181s to StartCluster
	I0214 22:01:25.477813  304371 settings.go:142] acquiring lock: {Name:mk406b901c9269f9ada66e0a2003d97b72f37c69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.477894  304371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 22:01:25.479312  304371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20315-243456/kubeconfig: {Name:mk8f367f144477b5c9c2379936e6834623246b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 22:01:25.479566  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 22:01:25.479594  304371 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0214 22:01:25.479566  304371 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.81 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0214 22:01:25.479695  304371 addons.go:69] Setting default-storageclass=true in profile "bridge-266997"
	I0214 22:01:25.479721  304371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-266997"
	I0214 22:01:25.479683  304371 addons.go:69] Setting storage-provisioner=true in profile "bridge-266997"
	I0214 22:01:25.479825  304371 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 22:01:25.479828  304371 addons.go:238] Setting addon storage-provisioner=true in "bridge-266997"
	I0214 22:01:25.479933  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.480344  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480370  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.480383  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.480400  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.481183  304371 out.go:177] * Verifying Kubernetes components...
	I0214 22:01:25.482440  304371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 22:01:25.495953  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42079
	I0214 22:01:25.495973  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0214 22:01:25.496360  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496536  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.496851  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.496873  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497082  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.497104  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.497237  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.497486  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.497490  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.498041  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.498075  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.500794  304371 addons.go:238] Setting addon default-storageclass=true in "bridge-266997"
	I0214 22:01:25.500829  304371 host.go:66] Checking if "bridge-266997" exists ...
	I0214 22:01:25.501072  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.501096  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.512606  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45155
	I0214 22:01:25.512964  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.513385  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.513407  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.513770  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.513947  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.515505  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.517101  304371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 22:01:25.518333  304371 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.518354  304371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 22:01:25.518373  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.520011  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43585
	I0214 22:01:25.520422  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.520847  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.520869  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.521183  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.521437  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.521710  304371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 22:01:25.521753  304371 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 22:01:25.521881  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.521906  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.522179  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.522387  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.522543  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.522708  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.535515  304371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0214 22:01:25.535896  304371 main.go:141] libmachine: () Calling .GetVersion
	I0214 22:01:25.536315  304371 main.go:141] libmachine: Using API Version  1
	I0214 22:01:25.536343  304371 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 22:01:25.536695  304371 main.go:141] libmachine: () Calling .GetMachineName
	I0214 22:01:25.536861  304371 main.go:141] libmachine: (bridge-266997) Calling .GetState
	I0214 22:01:25.538765  304371 main.go:141] libmachine: (bridge-266997) Calling .DriverName
	I0214 22:01:25.538948  304371 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:25.538962  304371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 22:01:25.538976  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHHostname
	I0214 22:01:25.541815  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542297  304371 main.go:141] libmachine: (bridge-266997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:15:b0", ip: ""} in network mk-bridge-266997: {Iface:virbr2 ExpiryTime:2025-02-14 23:00:56 +0000 UTC Type:0 Mac:52:54:00:b2:15:b0 Iaid: IPaddr:192.168.50.81 Prefix:24 Hostname:bridge-266997 Clientid:01:52:54:00:b2:15:b0}
	I0214 22:01:25.542316  304371 main.go:141] libmachine: (bridge-266997) DBG | domain bridge-266997 has defined IP address 192.168.50.81 and MAC address 52:54:00:b2:15:b0 in network mk-bridge-266997
	I0214 22:01:25.542488  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHPort
	I0214 22:01:25.542694  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHKeyPath
	I0214 22:01:25.542878  304371 main.go:141] libmachine: (bridge-266997) Calling .GetSSHUsername
	I0214 22:01:25.543023  304371 sshutil.go:53] new ssh client: &{IP:192.168.50.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/bridge-266997/id_rsa Username:docker}
	I0214 22:01:25.709288  304371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0214 22:01:25.709340  304371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 22:01:25.818938  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 22:01:25.883618  304371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 22:01:26.231097  304371 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0214 22:01:26.232118  304371 node_ready.go:35] waiting up to 15m0s for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244261  304371 node_ready.go:49] node "bridge-266997" is "Ready"
	I0214 22:01:26.244293  304371 node_ready.go:38] duration metric: took 12.148864ms for node "bridge-266997" to be "Ready" ...
	I0214 22:01:26.244325  304371 api_server.go:52] waiting for apiserver process to appear ...
	I0214 22:01:26.244387  304371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:26.454003  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454033  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454062  304371 api_server.go:72] duration metric: took 974.324958ms to wait for apiserver process to appear ...
	I0214 22:01:26.454104  304371 api_server.go:88] waiting for apiserver healthz status ...
	I0214 22:01:26.454137  304371 api_server.go:253] Checking apiserver healthz at https://192.168.50.81:8443/healthz ...
	I0214 22:01:26.454282  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454299  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454449  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454476  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454486  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454495  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454560  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454577  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454580  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.454586  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.454600  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.454869  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.454887  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.454929  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.457012  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.457107  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.457041  304371 main.go:141] libmachine: (bridge-266997) DBG | Closing plugin on server side
	I0214 22:01:26.464354  304371 api_server.go:279] https://192.168.50.81:8443/healthz returned 200:
	ok
	I0214 22:01:26.465264  304371 api_server.go:141] control plane version: v1.32.1
	I0214 22:01:26.465285  304371 api_server.go:131] duration metric: took 11.170116ms to wait for apiserver health ...
	I0214 22:01:26.465296  304371 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 22:01:26.471233  304371 main.go:141] libmachine: Making call to close driver server
	I0214 22:01:26.471249  304371 main.go:141] libmachine: (bridge-266997) Calling .Close
	I0214 22:01:26.471450  304371 main.go:141] libmachine: Successfully made call to close driver server
	I0214 22:01:26.471473  304371 main.go:141] libmachine: Making call to close connection to plugin binary
	I0214 22:01:26.471853  304371 system_pods.go:59] 8 kube-system pods found
	I0214 22:01:26.471889  304371 system_pods.go:61] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471903  304371 system_pods.go:61] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.471917  304371 system_pods.go:61] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.471930  304371 system_pods.go:61] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.471941  304371 system_pods.go:61] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.471957  304371 system_pods.go:61] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.471966  304371 system_pods.go:61] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.471979  304371 system_pods.go:61] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending
	I0214 22:01:26.471988  304371 system_pods.go:74] duration metric: took 6.684999ms to wait for pod list to return data ...
	I0214 22:01:26.472001  304371 default_sa.go:34] waiting for default service account to be created ...
	I0214 22:01:26.472806  304371 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 22:01:24.002770  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:24.015631  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:24.015700  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:24.051601  296043 cri.go:89] found id: ""
	I0214 22:01:24.051637  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.051649  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:24.051657  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:24.051710  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:24.084938  296043 cri.go:89] found id: ""
	I0214 22:01:24.084963  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.084971  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:24.084977  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:24.085019  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:24.118982  296043 cri.go:89] found id: ""
	I0214 22:01:24.119012  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.119023  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:24.119030  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:24.119091  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:24.150809  296043 cri.go:89] found id: ""
	I0214 22:01:24.150838  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.150849  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:24.150857  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:24.150927  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:24.180499  296043 cri.go:89] found id: ""
	I0214 22:01:24.180527  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.180538  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:24.180546  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:24.180613  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:24.214503  296043 cri.go:89] found id: ""
	I0214 22:01:24.214531  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.214542  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:24.214550  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:24.214616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:24.250992  296043 cri.go:89] found id: ""
	I0214 22:01:24.251018  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.251026  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:24.251032  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:24.251090  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:24.287791  296043 cri.go:89] found id: ""
	I0214 22:01:24.287816  296043 logs.go:282] 0 containers: []
	W0214 22:01:24.287824  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:24.287839  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:24.287854  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:24.324499  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:24.324533  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:24.373673  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:24.373700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:24.387527  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:24.387558  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:24.464362  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:24.464394  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:24.464409  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:26.474033  304371 addons.go:514] duration metric: took 994.441902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0214 22:01:26.476260  304371 default_sa.go:45] found service account: "default"
	I0214 22:01:26.476283  304371 default_sa.go:55] duration metric: took 4.273083ms for default service account to be created ...
	I0214 22:01:26.476293  304371 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 22:01:26.480354  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.480386  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480397  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.480410  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.480419  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.480429  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.480435  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.480445  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.480457  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.480479  304371 retry.go:31] will retry after 268.412371ms: missing components: kube-dns
	I0214 22:01:26.734480  304371 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-266997" context rescaled to 1 replicas
	I0214 22:01:26.752596  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:26.752625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752632  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:26.752639  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:26.752645  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:26.752649  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:26.752654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:26.752663  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:26.752668  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:26.752683  304371 retry.go:31] will retry after 253.744271ms: missing components: kube-dns
	I0214 22:01:27.010128  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.010160  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010169  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.010176  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.010182  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.010187  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.010190  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.010195  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.010200  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0214 22:01:27.010215  304371 retry.go:31] will retry after 373.755847ms: missing components: kube-dns
	I0214 22:01:27.387928  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.387976  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.387988  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.388001  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.388015  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.388022  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.388031  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.388040  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.388048  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.388073  304371 retry.go:31] will retry after 449.518817ms: missing components: kube-dns
	I0214 22:01:27.841591  304371 system_pods.go:86] 8 kube-system pods found
	I0214 22:01:27.841625  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841633  304371 system_pods.go:89] "coredns-668d6bf9bc-pkfx6" [d9a02ec0-bc3d-45d2-a0dd-54d7e6ac8185] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:27.841640  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:27.841646  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:27.841650  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:27.841654  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:27.841661  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:27.841664  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:27.841680  304371 retry.go:31] will retry after 522.702646ms: missing components: kube-dns
	I0214 22:01:28.368689  304371 system_pods.go:86] 7 kube-system pods found
	I0214 22:01:28.368725  304371 system_pods.go:89] "coredns-668d6bf9bc-m2ggw" [55a33121-2ec9-40f5-8886-0545de26c351] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0214 22:01:28.368733  304371 system_pods.go:89] "etcd-bridge-266997" [f930163d-ad68-475d-a172-97f652ad1ffc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 22:01:28.368741  304371 system_pods.go:89] "kube-apiserver-bridge-266997" [deba13a7-3d6c-40f7-9d98-90c803b1cc86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 22:01:28.368746  304371 system_pods.go:89] "kube-controller-manager-bridge-266997" [ab342af3-49a9-4a13-8190-9268ad85a92e] Running
	I0214 22:01:28.368753  304371 system_pods.go:89] "kube-proxy-xdwmc" [2384ba6f-6467-4557-b997-5445fa988ea8] Running
	I0214 22:01:28.368761  304371 system_pods.go:89] "kube-scheduler-bridge-266997" [bbd9511e-c2a6-4c91-9b74-fc8598cd9273] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 22:01:28.368765  304371 system_pods.go:89] "storage-provisioner" [5c333af2-bc4f-4fc9-8d88-0349f03eff5d] Running
	I0214 22:01:28.368774  304371 system_pods.go:126] duration metric: took 1.892474517s to wait for k8s-apps to be running ...
	I0214 22:01:28.368785  304371 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 22:01:28.368830  304371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:01:28.383657  304371 system_svc.go:56] duration metric: took 14.862939ms WaitForService to wait for kubelet
	I0214 22:01:28.383685  304371 kubeadm.go:578] duration metric: took 2.903970849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 22:01:28.383703  304371 node_conditions.go:102] verifying NodePressure condition ...
	I0214 22:01:28.387139  304371 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0214 22:01:28.387163  304371 node_conditions.go:123] node cpu capacity is 2
	I0214 22:01:28.387176  304371 node_conditions.go:105] duration metric: took 3.468187ms to run NodePressure ...
	I0214 22:01:28.387187  304371 start.go:241] waiting for startup goroutines ...
	I0214 22:01:28.387200  304371 start.go:246] waiting for cluster config update ...
	I0214 22:01:28.387215  304371 start.go:255] writing updated cluster config ...
	I0214 22:01:28.387551  304371 ssh_runner.go:195] Run: rm -f paused
	I0214 22:01:28.391627  304371 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:28.395108  304371 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:27.040249  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:27.052990  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:27.053055  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:27.092109  296043 cri.go:89] found id: ""
	I0214 22:01:27.092138  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.092150  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:27.092158  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:27.092219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:27.128290  296043 cri.go:89] found id: ""
	I0214 22:01:27.128323  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.128336  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:27.128344  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:27.128413  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:27.166086  296043 cri.go:89] found id: ""
	I0214 22:01:27.166113  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.166121  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:27.166127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:27.166174  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:27.198082  296043 cri.go:89] found id: ""
	I0214 22:01:27.198114  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.198126  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:27.198133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:27.198196  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:27.229133  296043 cri.go:89] found id: ""
	I0214 22:01:27.229167  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.229182  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:27.229190  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:27.229253  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:27.267454  296043 cri.go:89] found id: ""
	I0214 22:01:27.267483  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.267495  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:27.267504  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:27.267570  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:27.306235  296043 cri.go:89] found id: ""
	I0214 22:01:27.306265  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.306277  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:27.306289  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:27.306368  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:27.337862  296043 cri.go:89] found id: ""
	I0214 22:01:27.337894  296043 logs.go:282] 0 containers: []
	W0214 22:01:27.337905  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:27.337916  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:27.337928  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:27.384978  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:27.385007  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:27.398968  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:27.398999  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:27.468335  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:27.468363  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:27.468379  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:27.549329  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:27.549363  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:30.097135  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:30.110653  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:30.110740  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:30.148484  296043 cri.go:89] found id: ""
	I0214 22:01:30.148518  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.148530  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:30.148538  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:30.148611  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:30.183761  296043 cri.go:89] found id: ""
	I0214 22:01:30.183791  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.183802  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:30.183809  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:30.183866  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:30.216232  296043 cri.go:89] found id: ""
	I0214 22:01:30.216260  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.216271  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:30.216278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:30.216346  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:30.248173  296043 cri.go:89] found id: ""
	I0214 22:01:30.248199  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.248210  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:30.248217  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:30.248281  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:30.283288  296043 cri.go:89] found id: ""
	I0214 22:01:30.283318  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.283329  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:30.283350  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:30.283402  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:30.324270  296043 cri.go:89] found id: ""
	I0214 22:01:30.324297  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.324308  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:30.324317  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:30.324373  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:30.360122  296043 cri.go:89] found id: ""
	I0214 22:01:30.360146  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.360154  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:30.360159  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:30.360207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:30.394546  296043 cri.go:89] found id: ""
	I0214 22:01:30.394571  296043 logs.go:282] 0 containers: []
	W0214 22:01:30.394580  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:30.394594  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:30.394613  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:30.449231  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:30.449258  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:30.463475  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:30.463499  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:30.536719  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:30.536746  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:30.536762  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:30.619446  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:30.619484  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:01:30.438589  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:32.924767  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:33.159018  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:33.176759  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:33.176842  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:33.216502  296043 cri.go:89] found id: ""
	I0214 22:01:33.216527  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.216536  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:33.216542  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:33.216597  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:33.254772  296043 cri.go:89] found id: ""
	I0214 22:01:33.254799  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.254810  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:33.254817  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:33.254878  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:33.287687  296043 cri.go:89] found id: ""
	I0214 22:01:33.287713  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.287722  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:33.287728  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:33.287790  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:33.319969  296043 cri.go:89] found id: ""
	I0214 22:01:33.319990  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.319997  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:33.320002  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:33.320046  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:33.352720  296043 cri.go:89] found id: ""
	I0214 22:01:33.352740  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.352747  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:33.352752  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:33.352807  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:33.390638  296043 cri.go:89] found id: ""
	I0214 22:01:33.390662  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.390671  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:33.390678  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:33.390730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:33.425935  296043 cri.go:89] found id: ""
	I0214 22:01:33.425954  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.425962  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:33.425967  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:33.426012  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:33.459671  296043 cri.go:89] found id: ""
	I0214 22:01:33.459695  296043 logs.go:282] 0 containers: []
	W0214 22:01:33.459705  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:33.459716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:33.459730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:33.535469  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:33.535493  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:33.570473  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:33.570501  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:33.619720  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:33.619745  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:33.631829  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:33.631850  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:33.701637  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.202577  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:36.216700  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:36.216761  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:36.250764  296043 cri.go:89] found id: ""
	I0214 22:01:36.250789  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.250798  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:36.250804  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:36.250853  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:36.284811  296043 cri.go:89] found id: ""
	I0214 22:01:36.284838  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.284850  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:36.284857  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:36.284916  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:36.321197  296043 cri.go:89] found id: ""
	I0214 22:01:36.321219  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.321227  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:36.321235  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:36.321277  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:36.354869  296043 cri.go:89] found id: ""
	I0214 22:01:36.354896  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.354907  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:36.354915  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:36.354967  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:36.393688  296043 cri.go:89] found id: ""
	I0214 22:01:36.393712  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.393722  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:36.393730  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:36.393781  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:36.427985  296043 cri.go:89] found id: ""
	I0214 22:01:36.428006  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.428015  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:36.428023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:36.428076  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:36.458367  296043 cri.go:89] found id: ""
	I0214 22:01:36.458386  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.458393  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:36.458398  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:36.458446  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:36.489038  296043 cri.go:89] found id: ""
	I0214 22:01:36.489061  296043 logs.go:282] 0 containers: []
	W0214 22:01:36.489069  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:36.489080  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:36.489093  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:36.526950  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:36.526971  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:36.577258  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:36.577293  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:36.589545  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:36.589567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:36.658634  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:36.658656  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:36.658674  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0214 22:01:35.400875  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	W0214 22:01:37.900278  304371 pod_ready.go:104] pod "coredns-668d6bf9bc-m2ggw" is not "Ready", error: <nil>
	I0214 22:01:38.401005  304371 pod_ready.go:94] pod "coredns-668d6bf9bc-m2ggw" is "Ready"
	I0214 22:01:38.401031  304371 pod_ready.go:86] duration metric: took 10.005896118s for pod "coredns-668d6bf9bc-m2ggw" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.403160  304371 pod_ready.go:83] waiting for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.407295  304371 pod_ready.go:94] pod "etcd-bridge-266997" is "Ready"
	I0214 22:01:38.407320  304371 pod_ready.go:86] duration metric: took 4.131989ms for pod "etcd-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.409214  304371 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.413019  304371 pod_ready.go:94] pod "kube-apiserver-bridge-266997" is "Ready"
	I0214 22:01:38.413047  304371 pod_ready.go:86] duration metric: took 3.813497ms for pod "kube-apiserver-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.414707  304371 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.598300  304371 pod_ready.go:94] pod "kube-controller-manager-bridge-266997" is "Ready"
	I0214 22:01:38.598321  304371 pod_ready.go:86] duration metric: took 183.594312ms for pod "kube-controller-manager-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:38.799339  304371 pod_ready.go:83] waiting for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.198982  304371 pod_ready.go:94] pod "kube-proxy-xdwmc" is "Ready"
	I0214 22:01:39.199006  304371 pod_ready.go:86] duration metric: took 399.648451ms for pod "kube-proxy-xdwmc" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.400069  304371 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800157  304371 pod_ready.go:94] pod "kube-scheduler-bridge-266997" is "Ready"
	I0214 22:01:39.800184  304371 pod_ready.go:86] duration metric: took 400.072932ms for pod "kube-scheduler-bridge-266997" in "kube-system" namespace to be "Ready" or be gone ...
	I0214 22:01:39.800195  304371 pod_ready.go:40] duration metric: took 11.408545307s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0214 22:01:39.844662  304371 start.go:607] kubectl: 1.32.2, cluster: 1.32.1 (minor skew: 0)
	I0214 22:01:39.846593  304371 out.go:177] * Done! kubectl is now configured to use "bridge-266997" cluster and "default" namespace by default
	I0214 22:01:39.231339  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:39.244717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:39.244765  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:39.277734  296043 cri.go:89] found id: ""
	I0214 22:01:39.277756  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.277766  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:39.277773  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:39.277836  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:39.309896  296043 cri.go:89] found id: ""
	I0214 22:01:39.309916  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.309923  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:39.309931  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:39.309979  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:39.342579  296043 cri.go:89] found id: ""
	I0214 22:01:39.342608  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.342619  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:39.342637  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:39.342686  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:39.378083  296043 cri.go:89] found id: ""
	I0214 22:01:39.378112  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.378124  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:39.378134  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:39.378192  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:39.414803  296043 cri.go:89] found id: ""
	I0214 22:01:39.414828  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.414842  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:39.414850  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:39.414904  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:39.449659  296043 cri.go:89] found id: ""
	I0214 22:01:39.449690  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.449702  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:39.449711  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:39.449778  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:39.486261  296043 cri.go:89] found id: ""
	I0214 22:01:39.486288  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.486300  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:39.486308  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:39.486371  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:39.518224  296043 cri.go:89] found id: ""
	I0214 22:01:39.518245  296043 logs.go:282] 0 containers: []
	W0214 22:01:39.518253  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:39.518264  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:39.518277  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:39.598112  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:39.598145  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:39.634704  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:39.634727  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:39.685193  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:39.685217  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:39.697332  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:39.697355  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:39.773514  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.273720  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:42.290415  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:42.290491  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:42.329509  296043 cri.go:89] found id: ""
	I0214 22:01:42.329539  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.329549  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:42.329556  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:42.329616  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:42.366218  296043 cri.go:89] found id: ""
	I0214 22:01:42.366247  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.366259  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:42.366267  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:42.366324  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:42.404603  296043 cri.go:89] found id: ""
	I0214 22:01:42.404627  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.404634  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:42.404641  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:42.404691  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:42.437980  296043 cri.go:89] found id: ""
	I0214 22:01:42.438008  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.438017  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:42.438023  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:42.438072  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:42.470475  296043 cri.go:89] found id: ""
	I0214 22:01:42.470505  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.470517  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:42.470526  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:42.470592  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:42.503557  296043 cri.go:89] found id: ""
	I0214 22:01:42.503593  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.503606  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:42.503614  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:42.503681  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:42.537499  296043 cri.go:89] found id: ""
	I0214 22:01:42.537549  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.537559  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:42.537568  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:42.537629  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:42.581710  296043 cri.go:89] found id: ""
	I0214 22:01:42.581740  296043 logs.go:282] 0 containers: []
	W0214 22:01:42.581752  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:42.581765  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:42.581785  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:42.594891  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:42.594920  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:42.675186  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:42.675207  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:42.675221  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:42.762000  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:42.762033  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:42.813591  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:42.813644  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.368276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:45.383477  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:45.383541  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:45.419199  296043 cri.go:89] found id: ""
	I0214 22:01:45.419226  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.419235  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:45.419242  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:45.419286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:45.457708  296043 cri.go:89] found id: ""
	I0214 22:01:45.457740  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.457752  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:45.457761  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:45.457831  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:45.497110  296043 cri.go:89] found id: ""
	I0214 22:01:45.497138  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.497146  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:45.497154  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:45.497220  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:45.534294  296043 cri.go:89] found id: ""
	I0214 22:01:45.534318  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.534326  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:45.534333  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:45.534392  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:45.575462  296043 cri.go:89] found id: ""
	I0214 22:01:45.575492  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.575504  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:45.575513  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:45.575573  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:45.615590  296043 cri.go:89] found id: ""
	I0214 22:01:45.615620  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.615631  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:45.615639  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:45.615694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:45.655779  296043 cri.go:89] found id: ""
	I0214 22:01:45.655813  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.655826  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:45.655834  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:45.655903  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:45.691350  296043 cri.go:89] found id: ""
	I0214 22:01:45.691376  296043 logs.go:282] 0 containers: []
	W0214 22:01:45.691386  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:45.691395  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:45.691407  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:45.749784  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:45.749833  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:45.764193  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:45.764225  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:45.836887  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:45.836914  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:45.836930  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:45.943944  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:45.943974  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.486718  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:48.500667  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:48.500730  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:48.539749  296043 cri.go:89] found id: ""
	I0214 22:01:48.539775  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.539785  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:48.539794  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:48.539846  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:48.576675  296043 cri.go:89] found id: ""
	I0214 22:01:48.576703  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.576714  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:48.576723  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:48.576776  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:48.608593  296043 cri.go:89] found id: ""
	I0214 22:01:48.608618  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.608627  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:48.608634  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:48.608684  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:48.644181  296043 cri.go:89] found id: ""
	I0214 22:01:48.644210  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.644221  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:48.644228  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:48.644280  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:48.681188  296043 cri.go:89] found id: ""
	I0214 22:01:48.681214  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.681224  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:48.681232  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:48.681286  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:48.719817  296043 cri.go:89] found id: ""
	I0214 22:01:48.719847  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.719857  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:48.719865  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:48.719922  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:48.756080  296043 cri.go:89] found id: ""
	I0214 22:01:48.756107  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.756119  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:48.756127  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:48.756188  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:48.796664  296043 cri.go:89] found id: ""
	I0214 22:01:48.796692  296043 logs.go:282] 0 containers: []
	W0214 22:01:48.796703  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:48.796716  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:48.796730  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:48.877633  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:48.877660  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:48.924693  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:48.924726  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:48.980014  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:48.980045  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:48.993129  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:48.993153  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:49.067409  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:51.568106  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:51.583193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:51.583254  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:51.620026  296043 cri.go:89] found id: ""
	I0214 22:01:51.620050  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.620058  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:51.620063  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:51.620120  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:51.654068  296043 cri.go:89] found id: ""
	I0214 22:01:51.654103  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.654114  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:51.654122  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:51.654176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:51.689022  296043 cri.go:89] found id: ""
	I0214 22:01:51.689047  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.689055  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:51.689062  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:51.689118  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:51.725479  296043 cri.go:89] found id: ""
	I0214 22:01:51.725503  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.725513  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:51.725524  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:51.725576  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:51.761617  296043 cri.go:89] found id: ""
	I0214 22:01:51.761644  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.761653  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:51.761660  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:51.761719  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:51.802942  296043 cri.go:89] found id: ""
	I0214 22:01:51.802963  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.802972  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:51.802979  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:51.803027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:51.843214  296043 cri.go:89] found id: ""
	I0214 22:01:51.843242  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.843252  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:51.843264  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:51.843316  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:51.910513  296043 cri.go:89] found id: ""
	I0214 22:01:51.910550  296043 logs.go:282] 0 containers: []
	W0214 22:01:51.910562  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:51.910576  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:51.910594  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:51.923639  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:51.923676  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:52.014337  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:52.014366  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:52.014384  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:52.106586  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:52.106617  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:52.154349  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:52.154376  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:54.715843  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:54.729644  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:54.729694  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:54.766181  296043 cri.go:89] found id: ""
	I0214 22:01:54.766200  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.766210  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:54.766216  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:54.766276  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:54.808010  296043 cri.go:89] found id: ""
	I0214 22:01:54.808039  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.808050  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:54.808064  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:54.808130  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:54.856672  296043 cri.go:89] found id: ""
	I0214 22:01:54.856693  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.856711  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:54.856717  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:54.856762  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:54.906801  296043 cri.go:89] found id: ""
	I0214 22:01:54.906820  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.906827  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:54.906833  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:54.906873  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:54.951444  296043 cri.go:89] found id: ""
	I0214 22:01:54.951467  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.951477  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:54.951485  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:54.951539  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:54.993431  296043 cri.go:89] found id: ""
	I0214 22:01:54.993457  296043 logs.go:282] 0 containers: []
	W0214 22:01:54.993468  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:54.993476  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:54.993520  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:55.040664  296043 cri.go:89] found id: ""
	I0214 22:01:55.040714  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.040726  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:55.040735  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:55.040793  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:55.080280  296043 cri.go:89] found id: ""
	I0214 22:01:55.080309  296043 logs.go:282] 0 containers: []
	W0214 22:01:55.080317  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:55.080327  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:55.080342  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:55.141974  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:55.142012  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:55.159407  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:55.159436  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:55.238973  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:55.238998  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:55.239010  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:01:55.326876  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:55.326907  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:57.883816  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:01:57.898210  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:01:57.898270  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:01:57.933120  296043 cri.go:89] found id: ""
	I0214 22:01:57.933146  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.933155  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:01:57.933163  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:01:57.933219  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:01:57.968047  296043 cri.go:89] found id: ""
	I0214 22:01:57.968072  296043 logs.go:282] 0 containers: []
	W0214 22:01:57.968089  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:01:57.968096  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:01:57.968150  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:01:58.007167  296043 cri.go:89] found id: ""
	I0214 22:01:58.007194  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.007205  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:01:58.007213  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:01:58.007263  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:01:58.044221  296043 cri.go:89] found id: ""
	I0214 22:01:58.044249  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.044259  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:01:58.044270  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:01:58.044322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:01:58.079197  296043 cri.go:89] found id: ""
	I0214 22:01:58.079226  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.079237  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:01:58.079246  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:01:58.079308  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:01:58.115726  296043 cri.go:89] found id: ""
	I0214 22:01:58.115757  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.115768  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:01:58.115779  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:01:58.115833  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:01:58.151192  296043 cri.go:89] found id: ""
	I0214 22:01:58.151218  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.151226  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:01:58.151231  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:01:58.151279  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:01:58.186512  296043 cri.go:89] found id: ""
	I0214 22:01:58.186531  296043 logs.go:282] 0 containers: []
	W0214 22:01:58.186539  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:01:58.186548  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:01:58.186559  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:01:58.225500  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:01:58.225528  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:01:58.273842  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:01:58.273869  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:01:58.297373  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:01:58.297401  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:01:58.403111  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:01:58.403131  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:01:58.403155  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:00.996658  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:01.013323  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:01.013388  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:01.054606  296043 cri.go:89] found id: ""
	I0214 22:02:01.054647  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.054659  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:01.054667  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:01.054729  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:01.091830  296043 cri.go:89] found id: ""
	I0214 22:02:01.091860  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.091870  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:01.091878  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:01.091933  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:01.127100  296043 cri.go:89] found id: ""
	I0214 22:02:01.127126  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.127133  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:01.127139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:01.127176  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:01.160268  296043 cri.go:89] found id: ""
	I0214 22:02:01.160291  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.160298  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:01.160304  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:01.160354  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:01.192244  296043 cri.go:89] found id: ""
	I0214 22:02:01.192277  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.192290  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:01.192301  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:01.192372  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:01.226746  296043 cri.go:89] found id: ""
	I0214 22:02:01.226777  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.226787  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:01.226797  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:01.226848  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:01.264235  296043 cri.go:89] found id: ""
	I0214 22:02:01.264257  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.264266  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:01.264274  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:01.264325  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:01.299082  296043 cri.go:89] found id: ""
	I0214 22:02:01.299107  296043 logs.go:282] 0 containers: []
	W0214 22:02:01.299119  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:01.299137  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:01.299152  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:01.374067  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:01.374087  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:01.374100  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:01.466814  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:01.466842  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:01.508566  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:01.508591  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:01.565286  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:01.565318  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.079276  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:04.098100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:04.098168  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:04.148307  296043 cri.go:89] found id: ""
	I0214 22:02:04.148338  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.148347  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:04.148353  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:04.148401  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:04.182456  296043 cri.go:89] found id: ""
	I0214 22:02:04.182483  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.182493  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:04.182500  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:04.182548  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:04.222072  296043 cri.go:89] found id: ""
	I0214 22:02:04.222099  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.222107  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:04.222112  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:04.222155  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:04.255053  296043 cri.go:89] found id: ""
	I0214 22:02:04.255082  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.255092  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:04.255100  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:04.255154  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:04.293951  296043 cri.go:89] found id: ""
	I0214 22:02:04.293982  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.293991  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:04.293998  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:04.294051  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:04.334092  296043 cri.go:89] found id: ""
	I0214 22:02:04.334115  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.334123  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:04.334130  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:04.334179  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:04.366129  296043 cri.go:89] found id: ""
	I0214 22:02:04.366148  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.366160  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:04.366166  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:04.366207  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:04.398508  296043 cri.go:89] found id: ""
	I0214 22:02:04.398532  296043 logs.go:282] 0 containers: []
	W0214 22:02:04.398541  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:04.398554  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:04.398567  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:04.446518  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:04.446547  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:04.459347  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:04.459368  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:04.535181  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:04.535198  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:04.535212  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:04.608858  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:04.608891  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:07.150996  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:07.164414  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:07.164466  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:07.198549  296043 cri.go:89] found id: ""
	I0214 22:02:07.198571  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.198579  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:07.198585  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:07.198644  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:07.231429  296043 cri.go:89] found id: ""
	I0214 22:02:07.231454  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.231465  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:07.231472  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:07.231527  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:07.262244  296043 cri.go:89] found id: ""
	I0214 22:02:07.262266  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.262273  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:07.262278  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:07.262322  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:07.292654  296043 cri.go:89] found id: ""
	I0214 22:02:07.292670  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.292677  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:07.292686  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:07.292731  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:07.325893  296043 cri.go:89] found id: ""
	I0214 22:02:07.325911  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.325918  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:07.325923  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:07.325961  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:07.358776  296043 cri.go:89] found id: ""
	I0214 22:02:07.358799  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.358806  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:07.358811  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:07.358855  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:07.392029  296043 cri.go:89] found id: ""
	I0214 22:02:07.392052  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.392062  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:07.392073  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:07.392132  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:07.423080  296043 cri.go:89] found id: ""
	I0214 22:02:07.423105  296043 logs.go:282] 0 containers: []
	W0214 22:02:07.423115  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:07.423128  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:07.423142  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:07.473625  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:07.473649  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:07.486487  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:07.486510  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:07.550364  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:07.550387  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:07.550400  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:07.620727  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:07.620750  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:10.158575  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:10.171139  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:02:10.171189  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:02:10.203796  296043 cri.go:89] found id: ""
	I0214 22:02:10.203825  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.203837  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:02:10.203847  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:02:10.203905  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:02:10.235261  296043 cri.go:89] found id: ""
	I0214 22:02:10.235279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.235287  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:02:10.235292  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:02:10.235331  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:02:10.267017  296043 cri.go:89] found id: ""
	I0214 22:02:10.267037  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.267044  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:02:10.267052  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:02:10.267110  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:02:10.298100  296043 cri.go:89] found id: ""
	I0214 22:02:10.298121  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.298127  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:02:10.298133  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:02:10.298173  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:02:10.330163  296043 cri.go:89] found id: ""
	I0214 22:02:10.330189  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.330196  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:02:10.330205  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:02:10.330257  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:02:10.363253  296043 cri.go:89] found id: ""
	I0214 22:02:10.363279  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.363287  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:02:10.363293  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:02:10.363345  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:02:10.393052  296043 cri.go:89] found id: ""
	I0214 22:02:10.393073  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.393081  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:02:10.393086  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:02:10.393124  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:02:10.423261  296043 cri.go:89] found id: ""
	I0214 22:02:10.423284  296043 logs.go:282] 0 containers: []
	W0214 22:02:10.423292  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:02:10.423302  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:02:10.423314  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:02:10.474817  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:02:10.474839  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:02:10.487089  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:02:10.487117  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:02:10.552798  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:02:10.552818  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:02:10.552827  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:02:10.633678  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:02:10.633700  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0214 22:02:13.175779  296043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 22:02:13.188862  296043 kubeadm.go:593] duration metric: took 4m4.534890262s to restartPrimaryControlPlane
	W0214 22:02:13.188929  296043 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0214 22:02:13.188953  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:02:14.903694  296043 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.714713868s)
	I0214 22:02:14.903774  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:02:14.917520  296043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 22:02:14.927114  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:02:14.936531  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:02:14.936548  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:02:14.936593  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:02:14.945506  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:02:14.945543  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:02:14.954573  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:02:14.963268  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:02:14.963308  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:02:14.972385  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.981144  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:02:14.981190  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:02:14.990181  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:02:14.998739  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:02:14.998781  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:02:15.007880  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:02:15.079968  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:02:15.080063  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:02:15.227132  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:02:15.227264  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:02:15.227363  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:02:15.399613  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:02:15.401413  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:02:15.401514  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:02:15.401584  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:02:15.401699  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:02:15.401787  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:02:15.401887  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:02:15.403287  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:02:15.403395  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:02:15.403485  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:02:15.403584  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:02:15.403691  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:02:15.403760  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:02:15.403854  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:02:15.575946  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:02:15.646531  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:02:16.039563  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:02:16.210385  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:02:16.225322  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:02:16.226388  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:02:16.226445  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:02:16.354308  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:02:16.356102  296043 out.go:235]   - Booting up control plane ...
	I0214 22:02:16.356211  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:02:16.360283  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:02:16.361731  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:02:16.362515  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:02:16.373807  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:02:56.375481  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:02:56.376996  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:02:56.377215  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:01.377539  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:01.377722  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:11.378071  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:11.378255  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:03:31.379013  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:03:31.379253  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.380898  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:11.381134  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:11.381161  296043 kubeadm.go:310] 
	I0214 22:04:11.381223  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:04:11.381276  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:04:11.381287  296043 kubeadm.go:310] 
	I0214 22:04:11.381330  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:04:11.381386  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:04:11.381508  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:04:11.381517  296043 kubeadm.go:310] 
	I0214 22:04:11.381610  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:04:11.381661  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:04:11.381706  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:04:11.381713  296043 kubeadm.go:310] 
	I0214 22:04:11.381844  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:04:11.381962  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:04:11.381985  296043 kubeadm.go:310] 
	I0214 22:04:11.382159  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:04:11.382269  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:04:11.382378  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:04:11.382478  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:04:11.382488  296043 kubeadm.go:310] 
	I0214 22:04:11.383608  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:04:11.383712  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:04:11.383805  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0214 22:04:11.383962  296043 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0214 22:04:11.384029  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0214 22:04:11.847932  296043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 22:04:11.862250  296043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 22:04:11.872076  296043 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 22:04:11.872096  296043 kubeadm.go:157] found existing configuration files:
	
	I0214 22:04:11.872141  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0214 22:04:11.881248  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0214 22:04:11.881299  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0214 22:04:11.890591  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0214 22:04:11.899561  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0214 22:04:11.899609  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0214 22:04:11.908818  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.917642  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0214 22:04:11.917688  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0214 22:04:11.926938  296043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0214 22:04:11.936007  296043 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0214 22:04:11.936053  296043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0214 22:04:11.945314  296043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0214 22:04:12.015411  296043 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0214 22:04:12.015466  296043 kubeadm.go:310] [preflight] Running pre-flight checks
	I0214 22:04:12.151668  296043 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 22:04:12.151844  296043 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 22:04:12.151988  296043 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 22:04:12.322327  296043 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 22:04:12.324344  296043 out.go:235]   - Generating certificates and keys ...
	I0214 22:04:12.324451  296043 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0214 22:04:12.324530  296043 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0214 22:04:12.324659  296043 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0214 22:04:12.324761  296043 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0214 22:04:12.324855  296043 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0214 22:04:12.324934  296043 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0214 22:04:12.325109  296043 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0214 22:04:12.325566  296043 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0214 22:04:12.325866  296043 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0214 22:04:12.326334  296043 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0214 22:04:12.326391  296043 kubeadm.go:310] [certs] Using the existing "sa" key
	I0214 22:04:12.326453  296043 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 22:04:12.468450  296043 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 22:04:12.741068  296043 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 22:04:12.905628  296043 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 22:04:13.075487  296043 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 22:04:13.093105  296043 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 22:04:13.093840  296043 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 22:04:13.093897  296043 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0214 22:04:13.225868  296043 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 22:04:13.227602  296043 out.go:235]   - Booting up control plane ...
	I0214 22:04:13.227715  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 22:04:13.235626  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 22:04:13.238592  296043 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 22:04:13.239495  296043 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 22:04:13.246539  296043 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 22:04:53.249274  296043 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0214 22:04:53.249602  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:53.249764  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:04:58.250244  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:04:58.250486  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:08.251032  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:08.251247  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:05:28.253223  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:05:28.253527  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252450  296043 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0214 22:06:08.252752  296043 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0214 22:06:08.252786  296043 kubeadm.go:310] 
	I0214 22:06:08.252841  296043 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0214 22:06:08.252891  296043 kubeadm.go:310] 		timed out waiting for the condition
	I0214 22:06:08.252909  296043 kubeadm.go:310] 
	I0214 22:06:08.252957  296043 kubeadm.go:310] 	This error is likely caused by:
	I0214 22:06:08.253010  296043 kubeadm.go:310] 		- The kubelet is not running
	I0214 22:06:08.253150  296043 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0214 22:06:08.253160  296043 kubeadm.go:310] 
	I0214 22:06:08.253287  296043 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0214 22:06:08.253332  296043 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0214 22:06:08.253372  296043 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0214 22:06:08.253403  296043 kubeadm.go:310] 
	I0214 22:06:08.253569  296043 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0214 22:06:08.253692  296043 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0214 22:06:08.253701  296043 kubeadm.go:310] 
	I0214 22:06:08.253861  296043 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0214 22:06:08.253990  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0214 22:06:08.254095  296043 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0214 22:06:08.254195  296043 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0214 22:06:08.254206  296043 kubeadm.go:310] 
	I0214 22:06:08.254491  296043 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 22:06:08.254637  296043 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0214 22:06:08.254748  296043 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0214 22:06:08.254848  296043 kubeadm.go:394] duration metric: took 7m59.662371118s to StartCluster
	I0214 22:06:08.254965  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0214 22:06:08.255027  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0214 22:06:08.298673  296043 cri.go:89] found id: ""
	I0214 22:06:08.298694  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.298702  296043 logs.go:284] No container was found matching "kube-apiserver"
	I0214 22:06:08.298709  296043 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0214 22:06:08.298777  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0214 22:06:08.329697  296043 cri.go:89] found id: ""
	I0214 22:06:08.329717  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.329724  296043 logs.go:284] No container was found matching "etcd"
	I0214 22:06:08.329729  296043 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0214 22:06:08.329779  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0214 22:06:08.360276  296043 cri.go:89] found id: ""
	I0214 22:06:08.360296  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.360304  296043 logs.go:284] No container was found matching "coredns"
	I0214 22:06:08.360310  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0214 22:06:08.360370  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0214 22:06:08.391153  296043 cri.go:89] found id: ""
	I0214 22:06:08.391180  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.391188  296043 logs.go:284] No container was found matching "kube-scheduler"
	I0214 22:06:08.391193  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0214 22:06:08.391244  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0214 22:06:08.421880  296043 cri.go:89] found id: ""
	I0214 22:06:08.421907  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.421917  296043 logs.go:284] No container was found matching "kube-proxy"
	I0214 22:06:08.421924  296043 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0214 22:06:08.421974  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0214 22:06:08.453558  296043 cri.go:89] found id: ""
	I0214 22:06:08.453578  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.453587  296043 logs.go:284] No container was found matching "kube-controller-manager"
	I0214 22:06:08.453594  296043 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0214 22:06:08.453641  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0214 22:06:08.495718  296043 cri.go:89] found id: ""
	I0214 22:06:08.495750  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.495761  296043 logs.go:284] No container was found matching "kindnet"
	I0214 22:06:08.495772  296043 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0214 22:06:08.495845  296043 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0214 22:06:08.542115  296043 cri.go:89] found id: ""
	I0214 22:06:08.542141  296043 logs.go:282] 0 containers: []
	W0214 22:06:08.542152  296043 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0214 22:06:08.542165  296043 logs.go:123] Gathering logs for kubelet ...
	I0214 22:06:08.542180  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0214 22:06:08.605825  296043 logs.go:123] Gathering logs for dmesg ...
	I0214 22:06:08.605851  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0214 22:06:08.621228  296043 logs.go:123] Gathering logs for describe nodes ...
	I0214 22:06:08.621251  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0214 22:06:08.696999  296043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0214 22:06:08.697025  296043 logs.go:123] Gathering logs for CRI-O ...
	I0214 22:06:08.697050  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0214 22:06:08.796690  296043 logs.go:123] Gathering logs for container status ...
	I0214 22:06:08.796716  296043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0214 22:06:08.834010  296043 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0214 22:06:08.834068  296043 out.go:270] * 
	W0214 22:06:08.834153  296043 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.834166  296043 out.go:270] * 
	W0214 22:06:08.835011  296043 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0214 22:06:08.838512  296043 out.go:201] 
	W0214 22:06:08.839577  296043 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0214 22:06:08.839631  296043 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0214 22:06:08.839655  296043 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0214 22:06:08.840885  296043 out.go:201] 
	
	
	==> CRI-O <==
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.505774441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571591505725959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c1825bd-55f2-4e4d-a9d5-90ea0fe95c19 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.506397884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b97a6daa-2ecc-4564-9db5-c5eae811cd3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.506442520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b97a6daa-2ecc-4564-9db5-c5eae811cd3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.506494977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b97a6daa-2ecc-4564-9db5-c5eae811cd3b name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.538556690Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dff1604f-569e-45a2-b4cb-ae8821b84ab5 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.538668839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dff1604f-569e-45a2-b4cb-ae8821b84ab5 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.539766894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a15b323-eb2a-4551-ba19-d84b035db6ba name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.540343388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571591540286184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a15b323-eb2a-4551-ba19-d84b035db6ba name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.540819579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58c959a0-77d8-4df9-bf2c-1a410e6a4075 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.540880454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58c959a0-77d8-4df9-bf2c-1a410e6a4075 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.540922579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58c959a0-77d8-4df9-bf2c-1a410e6a4075 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.569258613Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ffc09e1-2dec-48d3-ac73-dca5d5acd571 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.569309811Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ffc09e1-2dec-48d3-ac73-dca5d5acd571 name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.570746644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5edcd2dd-7db3-4504-acf1-71271bf8620f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.571118002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571591571092937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5edcd2dd-7db3-4504-acf1-71271bf8620f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.571611192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59665a09-c32a-4bed-896b-673583b0eeba name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.571660409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59665a09-c32a-4bed-896b-673583b0eeba name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.571687973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=59665a09-c32a-4bed-896b-673583b0eeba name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.602078245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf74ecad-0280-4ec2-97c1-54444902da2f name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.602212764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf74ecad-0280-4ec2-97c1-54444902da2f name=/runtime.v1.RuntimeService/Version
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.603795920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f25389b-223c-49a5-b711-cc4d58ca5982 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.604299319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739571591604271851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f25389b-223c-49a5-b711-cc4d58ca5982 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.604894803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de68b601-b6e9-4fd4-89a6-f6f94911a3c7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.604940653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de68b601-b6e9-4fd4-89a6-f6f94911a3c7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 14 22:19:51 old-k8s-version-201745 crio[638]: time="2025-02-14 22:19:51.604979133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=de68b601-b6e9-4fd4-89a6-f6f94911a3c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb14 21:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.060243] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046957] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.427674] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.890736] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.894421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.931911] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.056852] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063369] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.207712] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.154341] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[Feb14 21:58] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +6.870486] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.069737] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.465278] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +9.377456] kauditd_printk_skb: 46 callbacks suppressed
	[Feb14 22:02] systemd-fstab-generator[5022]: Ignoring "noauto" option for root device
	[Feb14 22:04] systemd-fstab-generator[5313]: Ignoring "noauto" option for root device
	[  +0.064085] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:19:51 up 22 min,  0 users,  load average: 0.07, 0.02, 0.01
	Linux old-k8s-version-201745 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00066def0)
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b81ef0, 0x4f0ac20, 0xc000bb4140, 0x1, 0xc0001000c0)
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d87e0, 0xc0001000c0)
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b966a0, 0xc000ba8960)
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7042]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 14 22:19:50 old-k8s-version-201745 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 14 22:19:50 old-k8s-version-201745 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 14 22:19:50 old-k8s-version-201745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Feb 14 22:19:50 old-k8s-version-201745 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 14 22:19:50 old-k8s-version-201745 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7060]: I0214 22:19:50.850150    7060 server.go:416] Version: v1.20.0
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7060]: I0214 22:19:50.850488    7060 server.go:837] Client rotation is on, will bootstrap in background
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7060]: I0214 22:19:50.852368    7060 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7060]: I0214 22:19:50.853624    7060 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Feb 14 22:19:50 old-k8s-version-201745 kubelet[7060]: W0214 22:19:50.853628    7060 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 2 (239.594674ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-201745" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (279.89s)
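The failure above is consistent with minikube's own suggestion in the captured output: the kubelet on old-k8s-version-201745 never became healthy (its last log line reports "Cannot detect current cgroup on cgroup v2"), and the advice is to check 'journalctl -xeu kubelet' and retry with the systemd cgroup driver. A minimal shell sketch of that remediation follows; it is illustrative only and not part of the test run, assuming the profile name from this run and the out/minikube-linux-amd64 binary used throughout this report:

	# Inspect the kubelet service and its journal on the node (commands quoted from the kubeadm hint above)
	out/minikube-linux-amd64 -p old-k8s-version-201745 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-201745 ssh "sudo journalctl -xeu kubelet"
	# List any control-plane containers CRI-O started (crictl invocation quoted from the log)
	out/minikube-linux-amd64 -p old-k8s-version-201745 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver minikube suggests for this failure
	out/minikube-linux-amd64 start -p old-k8s-version-201745 --extra-config=kubelet.cgroup-driver=systemd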

                                                
                                    

Test pass (270/321)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.88
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 6.53
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 1.72
22 TestOffline 60.17
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.22
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.29
27 TestAddons/Setup 129.7
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.49
35 TestAddons/parallel/Registry 17.07
37 TestAddons/parallel/InspektorGadget 11.1
38 TestAddons/parallel/MetricsServer 6.16
40 TestAddons/parallel/CSI 58.16
41 TestAddons/parallel/Headlamp 17.93
42 TestAddons/parallel/CloudSpanner 6.57
43 TestAddons/parallel/LocalPath 55.41
44 TestAddons/parallel/NvidiaDevicePlugin 6.51
45 TestAddons/parallel/Yakd 10.7
47 TestAddons/StoppedEnableDisable 91.1
48 TestCertOptions 88.98
49 TestCertExpiration 277.47
51 TestForceSystemdFlag 82.62
52 TestForceSystemdEnv 63.21
54 TestKVMDriverInstallOrUpdate 1.31
58 TestErrorSpam/setup 42.06
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.64
63 TestErrorSpam/stop 5.06
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.03
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.22
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.05
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 35.69
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.34
89 TestFunctional/parallel/ConfigCmd 0.34
90 TestFunctional/parallel/DashboardCmd 19.24
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.01
97 TestFunctional/parallel/ServiceCmdConnect 9.5
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 21.54
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.35
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.16
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
116 TestFunctional/parallel/ProfileCmd/profile_list 0.38
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
118 TestFunctional/parallel/MountCmd/any-port 8.74
119 TestFunctional/parallel/MountCmd/specific-port 1.67
120 TestFunctional/parallel/ServiceCmd/List 0.51
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
123 TestFunctional/parallel/ServiceCmd/Format 0.38
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
125 TestFunctional/parallel/ServiceCmd/URL 0.37
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 0.88
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.49
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.67
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
132 TestFunctional/parallel/ImageCommands/ImageBuild 10.65
133 TestFunctional/parallel/ImageCommands/Setup 1.47
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.63
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.1
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.94
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 187.19
161 TestMultiControlPlane/serial/DeployApp 5.34
162 TestMultiControlPlane/serial/PingHostFromPods 1.16
163 TestMultiControlPlane/serial/AddWorkerNode 50.6
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
166 TestMultiControlPlane/serial/CopyFile 12.92
167 TestMultiControlPlane/serial/StopSecondaryNode 91.29
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
169 TestMultiControlPlane/serial/RestartSecondaryNode 23.84
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 399.93
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.09
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/StopCluster 272.5
175 TestMultiControlPlane/serial/RestartCluster 118.99
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
177 TestMultiControlPlane/serial/AddSecondaryNode 69.62
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
182 TestJSONOutput/start/Command 80.79
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.7
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.62
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.36
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.19
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 85.25
214 TestMountStart/serial/StartWithMountFirst 27.3
215 TestMountStart/serial/VerifyMountFirst 0.38
216 TestMountStart/serial/StartWithMountSecond 25.87
217 TestMountStart/serial/VerifyMountSecond 0.37
218 TestMountStart/serial/DeleteFirst 0.57
219 TestMountStart/serial/VerifyMountPostDelete 0.38
220 TestMountStart/serial/Stop 1.27
221 TestMountStart/serial/RestartStopped 21.73
222 TestMountStart/serial/VerifyMountPostStop 0.37
225 TestMultiNode/serial/FreshStart2Nodes 108.36
226 TestMultiNode/serial/DeployApp2Nodes 4.99
227 TestMultiNode/serial/PingHostFrom2Pods 0.76
228 TestMultiNode/serial/AddNode 48.58
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.57
231 TestMultiNode/serial/CopyFile 7.16
232 TestMultiNode/serial/StopNode 2.23
233 TestMultiNode/serial/StartAfterStop 35.57
234 TestMultiNode/serial/RestartKeepsNodes 314.54
235 TestMultiNode/serial/DeleteNode 2.5
236 TestMultiNode/serial/StopMultiNode 181.78
237 TestMultiNode/serial/RestartMultiNode 96.9
238 TestMultiNode/serial/ValidateNameConflict 43.5
245 TestScheduledStopUnix 110.76
249 TestRunningBinaryUpgrade 219.69
253 TestStoppedBinaryUpgrade/Setup 0.49
254 TestStoppedBinaryUpgrade/Upgrade 170.12
263 TestPause/serial/Start 128.48
264 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
267 TestNoKubernetes/serial/StartWithK8s 45.73
269 TestNoKubernetes/serial/StartWithStopK8s 10.15
277 TestNetworkPlugins/group/false 3.01
281 TestNoKubernetes/serial/Start 23.76
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
283 TestNoKubernetes/serial/ProfileList 0.9
284 TestNoKubernetes/serial/Stop 1.29
285 TestNoKubernetes/serial/StartNoArgs 65.37
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
290 TestStartStop/group/no-preload/serial/FirstStart 108.9
292 TestStartStop/group/embed-certs/serial/FirstStart 94.92
293 TestStartStop/group/no-preload/serial/DeployApp 10.27
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
295 TestStartStop/group/no-preload/serial/Stop 90.7
296 TestStartStop/group/embed-certs/serial/DeployApp 9.27
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
298 TestStartStop/group/embed-certs/serial/Stop 90.88
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.1
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
302 TestStartStop/group/no-preload/serial/SecondStart 58.44
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/embed-certs/serial/SecondStart 50.88
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.95
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.07
313 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
315 TestStartStop/group/no-preload/serial/Pause 2.66
316 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
317 TestStartStop/group/embed-certs/serial/Pause 2.88
319 TestStartStop/group/newest-cni/serial/FirstStart 48.94
320 TestNetworkPlugins/group/auto/Start 113.58
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
322 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 66.53
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
325 TestStartStop/group/newest-cni/serial/Stop 8.4
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
327 TestStartStop/group/newest-cni/serial/SecondStart 51.21
328 TestStartStop/group/old-k8s-version/serial/Stop 4.32
329 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
335 TestStartStop/group/newest-cni/serial/Pause 2.33
336 TestNetworkPlugins/group/kindnet/Start 67.55
337 TestNetworkPlugins/group/auto/KubeletFlags 0.23
338 TestNetworkPlugins/group/auto/NetCatPod 12.29
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.46
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.54
342 TestNetworkPlugins/group/calico/Start 89.09
343 TestNetworkPlugins/group/auto/DNS 0.14
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestNetworkPlugins/group/auto/HairPin 0.13
346 TestNetworkPlugins/group/custom-flannel/Start 101.76
347 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
348 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
349 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
350 TestNetworkPlugins/group/kindnet/DNS 0.17
351 TestNetworkPlugins/group/kindnet/Localhost 0.14
352 TestNetworkPlugins/group/kindnet/HairPin 0.13
353 TestNetworkPlugins/group/enable-default-cni/Start 57.78
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.24
356 TestNetworkPlugins/group/calico/NetCatPod 11.26
357 TestNetworkPlugins/group/calico/DNS 0.15
358 TestNetworkPlugins/group/calico/Localhost 0.13
359 TestNetworkPlugins/group/calico/HairPin 0.14
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.31
362 TestNetworkPlugins/group/flannel/Start 66.78
363 TestNetworkPlugins/group/custom-flannel/DNS 0.2
364 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
365 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
366 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
367 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.61
368 TestNetworkPlugins/group/bridge/Start 59.91
369 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
370 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
371 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
372 TestNetworkPlugins/group/flannel/ControllerPod 6.01
373 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
374 TestNetworkPlugins/group/flannel/NetCatPod 10.22
375 TestNetworkPlugins/group/flannel/DNS 0.14
376 TestNetworkPlugins/group/flannel/Localhost 0.12
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
378 TestNetworkPlugins/group/flannel/HairPin 0.12
379 TestNetworkPlugins/group/bridge/NetCatPod 9.23
380 TestNetworkPlugins/group/bridge/DNS 0.19
381 TestNetworkPlugins/group/bridge/Localhost 0.12
382 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (7.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-068536 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-068536 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.881258343s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.88s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0214 20:44:22.216725  250783 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0214 20:44:22.216854  250783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-068536
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-068536: exit status 85 (62.300881ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-068536 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |          |
	|         | -p download-only-068536        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 20:44:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 20:44:14.378283  250794 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:44:14.378545  250794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:14.378555  250794 out.go:358] Setting ErrFile to fd 2...
	I0214 20:44:14.378560  250794 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:14.378792  250794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	W0214 20:44:14.378948  250794 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20315-243456/.minikube/config/config.json: open /home/jenkins/minikube-integration/20315-243456/.minikube/config/config.json: no such file or directory
	I0214 20:44:14.379573  250794 out.go:352] Setting JSON to true
	I0214 20:44:14.381017  250794 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5198,"bootTime":1739560656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:44:14.381124  250794 start.go:140] virtualization: kvm guest
	I0214 20:44:14.383703  250794 out.go:97] [download-only-068536] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0214 20:44:14.383807  250794 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball: no such file or directory
	I0214 20:44:14.383859  250794 notify.go:220] Checking for updates...
	I0214 20:44:14.385415  250794 out.go:169] MINIKUBE_LOCATION=20315
	I0214 20:44:14.386850  250794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:44:14.388189  250794 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:44:14.389429  250794 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:44:14.390507  250794 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0214 20:44:14.392380  250794 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 20:44:14.392560  250794 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:44:14.503001  250794 out.go:97] Using the kvm2 driver based on user configuration
	I0214 20:44:14.503026  250794 start.go:304] selected driver: kvm2
	I0214 20:44:14.503033  250794 start.go:908] validating driver "kvm2" against <nil>
	I0214 20:44:14.503485  250794 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:14.504124  250794 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 20:44:14.518913  250794 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 20:44:14.518951  250794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 20:44:14.519539  250794 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0214 20:44:14.519725  250794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 20:44:14.519767  250794 cni.go:84] Creating CNI manager for ""
	I0214 20:44:14.519829  250794 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:44:14.519848  250794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 20:44:14.519935  250794 start.go:347] cluster config:
	{Name:download-only-068536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-068536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:44:14.520135  250794 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:14.521497  250794 out.go:97] Downloading VM boot image ...
	I0214 20:44:14.521528  250794 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0214 20:44:17.670061  250794 out.go:97] Starting "download-only-068536" primary control-plane node in "download-only-068536" cluster
	I0214 20:44:17.670081  250794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 20:44:17.693789  250794 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0214 20:44:17.693813  250794 cache.go:56] Caching tarball of preloaded images
	I0214 20:44:17.693945  250794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0214 20:44:17.695170  250794 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0214 20:44:17.695183  250794 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0214 20:44:17.719232  250794 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-068536 host does not exist
	  To start a cluster, run: "minikube start -p download-only-068536"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-068536
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (6.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-168650 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-168650 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.528943095s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (6.53s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0214 20:44:29.069489  250783 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0214 20:44:29.069564  250783 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-168650
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-168650: exit status 85 (62.617554ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-068536 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |                     |
	|         | -p download-only-068536        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| delete  | -p download-only-068536        | download-only-068536 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC | 14 Feb 25 20:44 UTC |
	| start   | -o=json --download-only        | download-only-168650 | jenkins | v1.35.0 | 14 Feb 25 20:44 UTC |                     |
	|         | -p download-only-168650        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/14 20:44:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 20:44:22.582191  251005 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:44:22.582435  251005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:22.582445  251005 out.go:358] Setting ErrFile to fd 2...
	I0214 20:44:22.582450  251005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:44:22.582618  251005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 20:44:22.583153  251005 out.go:352] Setting JSON to true
	I0214 20:44:22.583918  251005 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5207,"bootTime":1739560656,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:44:22.584021  251005 start.go:140] virtualization: kvm guest
	I0214 20:44:22.585707  251005 out.go:97] [download-only-168650] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 20:44:22.585881  251005 notify.go:220] Checking for updates...
	I0214 20:44:22.587526  251005 out.go:169] MINIKUBE_LOCATION=20315
	I0214 20:44:22.588811  251005 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:44:22.590085  251005 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:44:22.591345  251005 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:44:22.592485  251005 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0214 20:44:22.594653  251005 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 20:44:22.594868  251005 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:44:22.625414  251005 out.go:97] Using the kvm2 driver based on user configuration
	I0214 20:44:22.625439  251005 start.go:304] selected driver: kvm2
	I0214 20:44:22.625445  251005 start.go:908] validating driver "kvm2" against <nil>
	I0214 20:44:22.625713  251005 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:22.625780  251005 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20315-243456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0214 20:44:22.639775  251005 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0214 20:44:22.639821  251005 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0214 20:44:22.640361  251005 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0214 20:44:22.640511  251005 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 20:44:22.640545  251005 cni.go:84] Creating CNI manager for ""
	I0214 20:44:22.640603  251005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0214 20:44:22.640612  251005 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 20:44:22.640683  251005 start.go:347] cluster config:
	{Name:download-only-168650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-168650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:44:22.640782  251005 iso.go:125] acquiring lock: {Name:mka34a06110b1b0e5d10d07fdcf0f95d49c057e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 20:44:22.642041  251005 out.go:97] Starting "download-only-168650" primary control-plane node in "download-only-168650" cluster
	I0214 20:44:22.642054  251005 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 20:44:22.668404  251005 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0214 20:44:22.668422  251005 cache.go:56] Caching tarball of preloaded images
	I0214 20:44:22.668542  251005 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0214 20:44:22.669905  251005 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0214 20:44:22.669918  251005 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0214 20:44:22.697807  251005 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20315-243456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-168650 host does not exist
	  To start a cluster, run: "minikube start -p download-only-168650"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-168650
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (1.72s)

=== RUN   TestBinaryMirror
I0214 20:44:29.928971  250783 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-490503 --alsologtostderr --binary-mirror http://127.0.0.1:36107 --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:314: (dbg) Done: out/minikube-linux-amd64 start --download-only -p binary-mirror-490503 --alsologtostderr --binary-mirror http://127.0.0.1:36107 --driver=kvm2  --container-runtime=crio: (1.268056625s)
helpers_test.go:175: Cleaning up "binary-mirror-490503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-490503
--- PASS: TestBinaryMirror (1.72s)

TestOffline (60.17s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-157976 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-157976 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (59.157929709s)
helpers_test.go:175: Cleaning up "offline-crio-157976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-157976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-157976: (1.013176637s)
--- PASS: TestOffline (60.17s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371781
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-371781: exit status 85 (216.892444ms)

                                                
                                                
-- stdout --
	* Profile "addons-371781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371781"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.22s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371781
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-371781: exit status 85 (285.348727ms)

                                                
                                                
-- stdout --
	* Profile "addons-371781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-371781"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.29s)

TestAddons/Setup (129.7s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-371781 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-371781 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.696827481s)
--- PASS: TestAddons/Setup (129.70s)
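For reference, the start invocation exercised above, wrapped here purely for readability (the flag set is copied verbatim from the run; the backslashes are only shell line continuations):

    out/minikube-linux-amd64 start -p addons-371781 --wait=true --memory=4000 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher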

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-371781 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-371781 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-371781 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-371781 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d279f116-7ecb-4389-b50c-dc4e1e6388ca] Pending
helpers_test.go:344: "busybox" [d279f116-7ecb-4389-b50c-dc4e1e6388ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d279f116-7ecb-4389-b50c-dc4e1e6388ca] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003207859s
addons_test.go:633: (dbg) Run:  kubectl --context addons-371781 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-371781 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-371781 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.49s)

TestAddons/parallel/Registry (17.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 16.428552ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-6d8q9" [112052e6-40f0-43f6-8eab-72c10cd3b9aa] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004149075s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-q4kfl" [38d75acd-d5f9-40f8-a54f-f648b3982f76] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.002364896s
addons_test.go:331: (dbg) Run:  kubectl --context addons-371781 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-371781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-371781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.686018321s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 ip
2025/02/14 20:47:16 [DEBUG] GET http://192.168.39.67:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable registry --alsologtostderr -v=1: (1.205831712s)
--- PASS: TestAddons/parallel/Registry (17.07s)
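The reachability check above can be repeated by hand against the same profile; a minimal sketch, assuming the addons-371781 cluster is still running with the registry addon enabled (both commands are taken verbatim from the run above):

    out/minikube-linux-amd64 -p addons-371781 ip
    kubectl --context addons-371781 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"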

                                                
                                    
TestAddons/parallel/InspektorGadget (11.1s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bwrqf" [690ec988-727c-4451-b718-ed5cd570449b] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004016023s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable inspektor-gadget --alsologtostderr -v=1: (6.094842351s)
--- PASS: TestAddons/parallel/InspektorGadget (11.10s)

TestAddons/parallel/MetricsServer (6.16s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
I0214 20:47:00.912098  250783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0214 20:47:00.915903  250783 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0214 20:47:00.915932  250783 kapi.go:107] duration metric: took 3.84659ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:394: metrics-server stabilized in 16.936097ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-rts29" [ebeaa3ab-84cc-437b-bd18-6c140dad9938] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004022636s
addons_test.go:402: (dbg) Run:  kubectl --context addons-371781 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable metrics-server --alsologtostderr -v=1: (1.067472484s)
--- PASS: TestAddons/parallel/MetricsServer (6.16s)

TestAddons/parallel/CSI (58.16s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.858443ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [af0d85a3-5b51-4bd4-abde-b569995fff3b] Pending
helpers_test.go:344: "task-pv-pod" [af0d85a3-5b51-4bd4-abde-b569995fff3b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [af0d85a3-5b51-4bd4-abde-b569995fff3b] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.002902939s
addons_test.go:511: (dbg) Run:  kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-371781 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-371781 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-371781 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-371781 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [78fe9a7a-bdc3-4f37-9285-1ed4b7378123] Pending
helpers_test.go:344: "task-pv-pod-restore" [78fe9a7a-bdc3-4f37-9285-1ed4b7378123] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [78fe9a7a-bdc3-4f37-9285-1ed4b7378123] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003457936s
addons_test.go:553: (dbg) Run:  kubectl --context addons-371781 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-371781 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-371781 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.74444174s)
--- PASS: TestAddons/parallel/CSI (58.16s)
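The snapshot-and-restore sequence this test walks through can be reproduced manually with the same manifests; a minimal sketch, assuming the addons-371781 profile and the testdata/csi-hostpath-driver manifests used above (each command appears verbatim in the log):

    kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-371781 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-371781 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml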

                                                
                                    
TestAddons/parallel/Headlamp (17.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-371781 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-bq48d" [6fa7d441-1c7d-4e80-bd22-28dd97a9c398] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-bq48d" [6fa7d441-1c7d-4e80-bd22-28dd97a9c398] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-bq48d" [6fa7d441-1c7d-4e80-bd22-28dd97a9c398] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-bq48d" [6fa7d441-1c7d-4e80-bd22-28dd97a9c398] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006579323s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable headlamp --alsologtostderr -v=1: (6.04031677s)
--- PASS: TestAddons/parallel/Headlamp (17.93s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-tnh2r" [120b5286-2d4b-49ea-ba27-4ec9e601fc52] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003793956s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (55.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-371781 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-371781 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [41c00e68-93e4-436d-a8d1-49bcb358627a] Pending
helpers_test.go:344: "test-local-path" [41c00e68-93e4-436d-a8d1-49bcb358627a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [41c00e68-93e4-436d-a8d1-49bcb358627a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [41c00e68-93e4-436d-a8d1-49bcb358627a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003095577s
addons_test.go:906: (dbg) Run:  kubectl --context addons-371781 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 ssh "cat /opt/local-path-provisioner/pvc-75fb9ef7-af9b-4cf9-a68e-f5a0c0cc3c43_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-371781 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-371781 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.424249379s)
--- PASS: TestAddons/parallel/LocalPath (55.41s)

TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dnfkm" [9d95b55b-46ad-487c-9125-9f8b59218d7b] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003924361s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

TestAddons/parallel/Yakd (10.7s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-7st8v" [ad6a3420-ab0e-4e7a-a67e-96663fdb86e0] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004874112s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-371781 addons disable yakd --alsologtostderr -v=1: (5.694733562s)
--- PASS: TestAddons/parallel/Yakd (10.70s)

TestAddons/StoppedEnableDisable (91.1s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-371781
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-371781: (1m30.830089548s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-371781
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-371781
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-371781
--- PASS: TestAddons/StoppedEnableDisable (91.10s)

TestCertOptions (88.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-733237 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-733237 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m27.832099606s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-733237 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-733237 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-733237 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-733237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-733237
--- PASS: TestCertOptions (88.98s)

TestCertExpiration (277.47s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-191481 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-191481 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.066794962s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-191481 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-191481 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.708719016s)
helpers_test.go:175: Cleaning up "cert-expiration-191481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-191481
--- PASS: TestCertExpiration (277.47s)

TestForceSystemdFlag (82.62s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-203280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-203280 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.761791422s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-203280 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-203280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-203280
--- PASS: TestForceSystemdFlag (82.62s)

TestForceSystemdEnv (63.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-054462 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-054462 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.546658662s)
helpers_test.go:175: Cleaning up "force-systemd-env-054462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-054462
--- PASS: TestForceSystemdEnv (63.21s)

TestKVMDriverInstallOrUpdate (1.31s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0214 21:48:18.947511  250783 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0214 21:48:18.947656  250783 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0214 21:48:18.978953  250783 install.go:62] docker-machine-driver-kvm2: exit status 1
W0214 21:48:18.979330  250783 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0214 21:48:18.979398  250783 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1425863628/001/docker-machine-driver-kvm2
I0214 21:48:19.086081  250783 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1425863628/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840 0x5494840] Decompressors:map[bz2:0xc0006043f8 gz:0xc000604480 tar:0xc000604430 tar.bz2:0xc000604440 tar.gz:0xc000604450 tar.xz:0xc000604460 tar.zst:0xc000604470 tbz2:0xc000604440 tgz:0xc000604450 txz:0xc000604460 tzst:0xc000604470 xz:0xc000604488 zip:0xc0006044a0 zst:0xc0006044b0] Getters:map[file:0xc001cef920 http:0xc000528730 https:0xc000528780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0214 21:48:19.086154  250783 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1425863628/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.31s)
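The log above shows the driver download falling back from the arch-specific release asset to the common one when the arch-specific checksum file returns 404. A rough manual equivalent, sketched with curl (the curl commands are illustrative only, not part of the test; the URLs are the ones from the log, minus the checksum query string):

    # hypothetical by-hand fallback for the kvm2 driver download seen above
    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2-amd64" \
      || curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2"
    chmod +x docker-machine-driver-kvm2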

                                                
                                    
TestErrorSpam/setup (42.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-137470 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-137470 --driver=kvm2  --container-runtime=crio
E0214 20:51:42.299368  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.305778  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.317092  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.338574  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.380044  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.461424  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.622873  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:42.944579  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:43.586791  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:44.868427  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:47.430305  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:51:52.553057  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:52:02.794495  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-137470 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-137470 --driver=kvm2  --container-runtime=crio: (42.058586802s)
--- PASS: TestErrorSpam/setup (42.06s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.54s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 pause
--- PASS: TestErrorSpam/pause (1.54s)

TestErrorSpam/unpause (1.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (5.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop: (2.296131378s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop: (1.317178806s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-137470 --log_dir /tmp/nospam-137470 stop: (1.447454572s)
--- PASS: TestErrorSpam/stop (5.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20315-243456/.minikube/files/etc/test/nested/copy/250783/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0214 20:52:23.276165  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:53:04.238823  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-471578 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.028467954s)
--- PASS: TestFunctional/serial/StartWithProxy (83.03s)
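
For reference, the start invocation this test exercises can be run by hand; a minimal sketch using the profile name and flags logged above (the CI build at out/minikube-linux-amd64 stands in for a released minikube binary):

	# create a KVM-backed profile with the CRI-O runtime and wait for all components
	out/minikube-linux-amd64 start -p functional-471578 \
	  --memory=4000 --apiserver-port=8441 --wait=all \
	  --driver=kvm2 --container-runtime=crio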

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.22s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0214 20:53:42.582172  250783 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-471578 --alsologtostderr -v=8: (41.219446347s)
functional_test.go:680: soft start took 41.220367262s for "functional-471578" cluster.
I0214 20:54:23.801989  250783 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (41.22s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-471578 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:3.1: (1.060761746s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:3.3: (1.162538926s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:latest
E0214 20:54:26.160609  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 cache add registry.k8s.io/pause:latest: (1.097923237s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-471578 /tmp/TestFunctionalserialCacheCmdcacheadd_local2640565980/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache add minikube-local-cache-test:functional-471578
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache delete minikube-local-cache-test:functional-471578
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-471578
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.154434ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
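
The cache_reload flow above can be reproduced manually; a sketch based only on the commands logged in this test (image and profile name are taken from this run):

	# remove a cached image from inside the node and confirm it is gone
	out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image not present
	# push all cached images back into the node, then verify again
	out/minikube-linux-amd64 -p functional-471578 cache reload
	out/minikube-linux-amd64 -p functional-471578 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds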

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 kubectl -- --context functional-471578 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-471578 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.69s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-471578 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.68942168s)
functional_test.go:778: restart took 35.689602063s for "functional-471578" cluster.
I0214 20:55:06.281870  250783 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (35.69s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-471578 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 logs: (1.356228125s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 logs --file /tmp/TestFunctionalserialLogsFileCmd93284878/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 logs --file /tmp/TestFunctionalserialLogsFileCmd93284878/001/logs.txt: (1.334001368s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-471578 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-471578
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-471578: exit status 115 (267.336816ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.172:30317 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-471578 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 config get cpus: exit status 14 (62.091053ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 config get cpus: exit status 14 (45.365488ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
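
The exit codes seen above follow from how `minikube config` treats a key that is not set; a sketch of the same sequence (exit status 14 for a missing key is taken from this log, not from separate documentation):

	out/minikube-linux-amd64 -p functional-471578 config unset cpus
	out/minikube-linux-amd64 -p functional-471578 config get cpus    # exit 14: key not found in config
	out/minikube-linux-amd64 -p functional-471578 config set cpus 2
	out/minikube-linux-amd64 -p functional-471578 config get cpus    # prints the stored value
	out/minikube-linux-amd64 -p functional-471578 config unset cpus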

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471578 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-471578 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 257922: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.24s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471578 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (154.026564ms)

                                                
                                                
-- stdout --
	* [functional-471578] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 20:55:16.140676  257790 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:55:16.140842  257790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:16.140855  257790 out.go:358] Setting ErrFile to fd 2...
	I0214 20:55:16.140863  257790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:16.141121  257790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 20:55:16.141780  257790 out.go:352] Setting JSON to false
	I0214 20:55:16.143077  257790 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5860,"bootTime":1739560656,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:55:16.143287  257790 start.go:140] virtualization: kvm guest
	I0214 20:55:16.146042  257790 out.go:177] * [functional-471578] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 20:55:16.147390  257790 notify.go:220] Checking for updates...
	I0214 20:55:16.147410  257790 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 20:55:16.148941  257790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:55:16.150057  257790 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:55:16.151170  257790 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:55:16.152350  257790 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 20:55:16.153388  257790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 20:55:16.154990  257790 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:55:16.155663  257790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:16.155729  257790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.171914  257790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0214 20:55:16.172386  257790 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.172969  257790 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.172987  257790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.173334  257790 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.173574  257790 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.173798  257790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:55:16.174083  257790 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:16.174129  257790 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.191443  257790 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0214 20:55:16.191934  257790 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.192395  257790 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.192410  257790 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.192711  257790 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.192890  257790 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.223543  257790 out.go:177] * Using the kvm2 driver based on existing profile
	I0214 20:55:16.224611  257790 start.go:304] selected driver: kvm2
	I0214 20:55:16.224624  257790 start.go:908] validating driver "kvm2" against &{Name:functional-471578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterNa
me:functional-471578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:55:16.224747  257790 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 20:55:16.226646  257790 out.go:201] 
	W0214 20:55:16.227659  257790 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 20:55:16.228659  257790 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
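
The dry-run check above validates flags against the existing profile without touching the VM; a sketch of the two invocations from the log (exit status 23 and the 1800MB floor are as reported above):

	# rejected: 250MB is below the usable minimum of 1800MB
	out/minikube-linux-amd64 start -p functional-471578 --dry-run --memory 250MB \
	  --alsologtostderr --driver=kvm2 --container-runtime=crio    # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
	# accepted: same profile, no memory override
	out/minikube-linux-amd64 start -p functional-471578 --dry-run \
	  --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio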

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-471578 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-471578 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.845794ms)

                                                
                                                
-- stdout --
	* [functional-471578] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 20:55:15.979962  257750 out.go:345] Setting OutFile to fd 1 ...
	I0214 20:55:15.980052  257750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:15.980059  257750 out.go:358] Setting ErrFile to fd 2...
	I0214 20:55:15.980064  257750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 20:55:15.980304  257750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 20:55:15.980825  257750 out.go:352] Setting JSON to false
	I0214 20:55:15.981767  257750 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5860,"bootTime":1739560656,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 20:55:15.981871  257750 start.go:140] virtualization: kvm guest
	I0214 20:55:15.983733  257750 out.go:177] * [functional-471578] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0214 20:55:15.985482  257750 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 20:55:15.985489  257750 notify.go:220] Checking for updates...
	I0214 20:55:15.987708  257750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 20:55:15.988916  257750 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 20:55:15.989919  257750 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 20:55:15.990938  257750 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 20:55:15.991884  257750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 20:55:15.993115  257750 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 20:55:15.993499  257750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:15.993539  257750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.011426  257750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0214 20:55:16.011835  257750 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.012506  257750 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.012546  257750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.012929  257750 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.013152  257750 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.013428  257750 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 20:55:16.013810  257750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 20:55:16.013862  257750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 20:55:16.030371  257750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I0214 20:55:16.030788  257750 main.go:141] libmachine: () Calling .GetVersion
	I0214 20:55:16.031298  257750 main.go:141] libmachine: Using API Version  1
	I0214 20:55:16.031333  257750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 20:55:16.031708  257750 main.go:141] libmachine: () Calling .GetMachineName
	I0214 20:55:16.031926  257750 main.go:141] libmachine: (functional-471578) Calling .DriverName
	I0214 20:55:16.069409  257750 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0214 20:55:16.070583  257750 start.go:304] selected driver: kvm2
	I0214 20:55:16.070602  257750 start.go:908] validating driver "kvm2" against &{Name:functional-471578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1739182054-20387@sha256:3788b0691001f3da958b3956b3e6c1d1db8535d5286bd2e096e6e75dc609dbad Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterNa
me:functional-471578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.172 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/je
nkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0214 20:55:16.070797  257750 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 20:55:16.072849  257750 out.go:201] 
	W0214 20:55:16.074003  257750 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0214 20:55:16.075181  257750 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-471578 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-471578 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hfj77" [fa3fc291-1d4f-4383-8342-5159912789e8] Pending
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hfj77" [fa3fc291-1d4f-4383-8342-5159912789e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-hfj77" [fa3fc291-1d4f-4383-8342-5159912789e8] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003845472s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service hello-node-connect --url
2025/02/14 20:55:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.39.172:31574
functional_test.go:1692: http://192.168.39.172:31574: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-hfj77

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.172:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.172:31574
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.50s)
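
The service-connect flow above maps to a short deploy/expose/lookup sequence; a sketch using the commands logged in this test (fetching the URL with curl is an assumption for illustration, the test itself uses a Go HTTP client):

	kubectl --context functional-471578 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-471578 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints the NodePort URL, e.g. http://192.168.39.172:31574 in this run
	out/minikube-linux-amd64 -p functional-471578 service hello-node-connect --url
	curl "$(out/minikube-linux-amd64 -p functional-471578 service hello-node-connect --url)"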

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh -n functional-471578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cp functional-471578:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3921860331/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh -n functional-471578 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh -n functional-471578 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)
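
The copy test above exercises both directions of `minikube cp`; a sketch of the same round trip using the paths logged in this test (the /tmp destination below is a stand-in for the per-run temp directory):

	# host -> node, then verify inside the VM
	out/minikube-linux-amd64 -p functional-471578 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-471578 ssh -n functional-471578 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-amd64 -p functional-471578 cp functional-471578:/home/docker/cp-test.txt /tmp/cp-test.txt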

                                                
                                    
TestFunctional/parallel/MySQL (21.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-471578 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-747xg" [a90e7a5a-b398-44ce-93f1-bffccab1a52a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-747xg" [a90e7a5a-b398-44ce-93f1-bffccab1a52a] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003002344s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-471578 exec mysql-58ccfd96bb-747xg -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-471578 exec mysql-58ccfd96bb-747xg -- mysql -ppassword -e "show databases;": exit status 1 (143.416975ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0214 20:55:56.157896  250783 retry.go:31] will retry after 1.096514657s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-471578 exec mysql-58ccfd96bb-747xg -- mysql -ppassword -e "show databases;"
E0214 20:56:42.290953  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 20:57:10.002779  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (21.54s)

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/250783/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /etc/test/nested/copy/250783/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/250783.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /etc/ssl/certs/250783.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/250783.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /usr/share/ca-certificates/250783.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/2507832.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /etc/ssl/certs/2507832.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/2507832.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /usr/share/ca-certificates/2507832.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-471578 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active docker": exit status 1 (221.212529ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active containerd": exit status 1 (234.399487ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
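
The check above confirms that only CRI-O is active on a crio-runtime profile; a sketch of the probe (the non-zero exit simply mirrors systemctl's status 3 for an inactive unit, as seen in the log):

	out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active docker"      # prints "inactive", exit 3
	out/minikube-linux-amd64 -p functional-471578 ssh "sudo systemctl is-active containerd"  # prints "inactive", exit 3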

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-471578 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-471578 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-8pkq2" [1f45186a-94f5-420e-a40e-3c3aab735c45] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-8pkq2" [1f45186a-94f5-420e-a40e-3c3aab735c45] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004675735s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "328.600019ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "51.236624ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "358.71259ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "59.291929ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdany-port2090134116/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739566514755875550" to /tmp/TestFunctionalparallelMountCmdany-port2090134116/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739566514755875550" to /tmp/TestFunctionalparallelMountCmdany-port2090134116/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739566514755875550" to /tmp/TestFunctionalparallelMountCmdany-port2090134116/001/test-1739566514755875550
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.401575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0214 20:55:15.048646  250783 retry.go:31] will retry after 488.984236ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 20:55 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 20:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 20:55 test-1739566514755875550
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh cat /mount-9p/test-1739566514755875550
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-471578 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c09ee777-fe60-42ad-aaab-ed0413070a94] Pending
helpers_test.go:344: "busybox-mount" [c09ee777-fe60-42ad-aaab-ed0413070a94] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c09ee777-fe60-42ad-aaab-ed0413070a94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c09ee777-fe60-42ad-aaab-ed0413070a94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004807473s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-471578 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdany-port2090134116/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.74s)
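Editor's note: the any-port run above follows a simple pattern: start the mount daemon, poll `findmnt -T /mount-9p | grep 9p` over ssh until the 9p mount appears (hence the retry logged at retry.go:31), then exercise the files from both the host and a pod. Below is a minimal Go sketch of that polling step; the binary path, profile name and mount point are copied from this log, while the attempt count and backoff values are illustrative and not the test suite's actual retry helper.

// pollmount.go: sketch of the poll-with-backoff step seen above. Assumes the
// minikube binary and profile name from this log; not minikube's own code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount repeatedly runs `minikube ssh "findmnt -T <dir> | grep 9p"`
// until it succeeds or attempts run out, doubling the delay between tries
// (the log shows one failure followed by a retry roughly half a second later).
func waitForMount(profile, dir string, attempts int) error {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if err := cmd.Run(); err == nil {
			return nil // the 9p mount is visible inside the guest
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("mount %s not visible after %d attempts", dir, attempts)
}

func main() {
	if err := waitForMount("functional-471578", "/mount-9p", 5); err != nil {
		fmt.Println(err)
	}
}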

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdspecific-port253566282/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.602477ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0214 20:55:23.745140  250783 retry.go:31] will retry after 257.12481ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdspecific-port253566282/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "sudo umount -f /mount-9p": exit status 1 (242.529954ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-471578 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdspecific-port253566282/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service list -o json
functional_test.go:1511: Took "516.674107ms" to run "out/minikube-linux-amd64 -p functional-471578 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.39.172:31102
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T" /mount1: exit status 1 (324.123557ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0214 20:55:25.490272  250783 retry.go:31] will retry after 311.28213ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-471578 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-471578 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249169346/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)
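Editor's note: VerifyCleanup starts three background mount daemons and then relies on a single `mount --kill=true` to tear them all down, which is why the later stop steps find no parent process. A rough Go sketch of that flow follows; the binary path, profile, guest mount points and kill flag come from the log above, while the host directory here is hypothetical (the test uses a temporary directory).

// mountcleanup.go: sketch of the VerifyCleanup flow: launch several mount
// daemons in the background, then terminate them all with `mount --kill=true`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	hostDir := "/tmp/example-mount-src" // hypothetical; the test uses a temp dir
	var daemons []*exec.Cmd
	for _, guest := range []string{"/mount1", "/mount2", "/mount3"} {
		cmd := exec.Command("out/minikube-linux-amd64", "mount",
			"-p", "functional-471578", hostDir+":"+guest)
		if err := cmd.Start(); err != nil { // run each mount as a background daemon
			fmt.Println("start failed:", err)
			return
		}
		daemons = append(daemons, cmd)
	}

	// A single --kill=true call terminates every mount process for the profile.
	kill := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-471578", "--kill=true")
	fmt.Println("kill:", kill.Run())

	for _, d := range daemons {
		d.Wait() // reap the now-terminated daemons; errors are expected here
	}
}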

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.39.172:31102
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
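Editor's note: the ServiceCmd tests resolve the NodePort endpoint for the hello-node service (http://192.168.39.172:31102 in the runs above) via `service hello-node --url`. The short Go sketch below captures that URL and probes it with an HTTP GET; the probe is illustrative only and not part of the test, and the binary path and profile name are taken from this log.

// serviceurl.go: sketch of consuming the endpoint printed by `service ... --url`.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-471578",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.172:31102
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded with", resp.Status)
}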

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471578 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-471578
localhost/kicbase/echo-server:functional-471578
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471578 image ls --format short --alsologtostderr:
I0214 20:55:37.802759  259633 out.go:345] Setting OutFile to fd 1 ...
I0214 20:55:37.802926  259633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:37.802938  259633 out.go:358] Setting ErrFile to fd 2...
I0214 20:55:37.802944  259633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:37.803238  259633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
I0214 20:55:37.804065  259633 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:37.804216  259633 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:37.804716  259633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:37.804796  259633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:37.822326  259633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
I0214 20:55:37.822914  259633 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:37.823555  259633 main.go:141] libmachine: Using API Version  1
I0214 20:55:37.823598  259633 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:37.823981  259633 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:37.824189  259633 main.go:141] libmachine: (functional-471578) Calling .GetState
I0214 20:55:37.826224  259633 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:37.826275  259633 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:37.841813  259633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41007
I0214 20:55:37.842188  259633 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:37.842656  259633 main.go:141] libmachine: Using API Version  1
I0214 20:55:37.842680  259633 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:37.843057  259633 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:37.843281  259633 main.go:141] libmachine: (functional-471578) Calling .DriverName
I0214 20:55:37.843499  259633 ssh_runner.go:195] Run: systemctl --version
I0214 20:55:37.843528  259633 main.go:141] libmachine: (functional-471578) Calling .GetSSHHostname
I0214 20:55:37.846370  259633 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:37.846834  259633 main.go:141] libmachine: (functional-471578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:4b:21", ip: ""} in network mk-functional-471578: {Iface:virbr1 ExpiryTime:2025-02-14 21:52:33 +0000 UTC Type:0 Mac:52:54:00:4a:4b:21 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:functional-471578 Clientid:01:52:54:00:4a:4b:21}
I0214 20:55:37.846863  259633 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined IP address 192.168.39.172 and MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:37.846999  259633 main.go:141] libmachine: (functional-471578) Calling .GetSSHPort
I0214 20:55:37.847190  259633 main.go:141] libmachine: (functional-471578) Calling .GetSSHKeyPath
I0214 20:55:37.847368  259633 main.go:141] libmachine: (functional-471578) Calling .GetSSHUsername
I0214 20:55:37.847535  259633 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/functional-471578/id_rsa Username:docker}
I0214 20:55:37.981285  259633 ssh_runner.go:195] Run: sudo crictl images --output json
I0214 20:55:38.228179  259633 main.go:141] libmachine: Making call to close driver server
I0214 20:55:38.228194  259633 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:38.228506  259633 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:38.228528  259633 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:38.228547  259633 main.go:141] libmachine: Making call to close driver server
I0214 20:55:38.228549  259633 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
I0214 20:55:38.228556  259633 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:38.228854  259633 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:38.228874  259633 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471578 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| localhost/kicbase/echo-server           | functional-471578  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-471578  | 762440bc74f23 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471578 image ls --format table --alsologtostderr:
I0214 20:55:39.115988  259758 out.go:345] Setting OutFile to fd 1 ...
I0214 20:55:39.116295  259758 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:39.116305  259758 out.go:358] Setting ErrFile to fd 2...
I0214 20:55:39.116312  259758 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:39.116610  259758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
I0214 20:55:39.117412  259758 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:39.117570  259758 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:39.118083  259758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:39.118133  259758 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:39.133263  259758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43421
I0214 20:55:39.133754  259758 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:39.134383  259758 main.go:141] libmachine: Using API Version  1
I0214 20:55:39.134415  259758 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:39.134817  259758 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:39.135072  259758 main.go:141] libmachine: (functional-471578) Calling .GetState
I0214 20:55:39.137058  259758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:39.137101  259758 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:39.151812  259758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
I0214 20:55:39.152201  259758 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:39.152678  259758 main.go:141] libmachine: Using API Version  1
I0214 20:55:39.152703  259758 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:39.153017  259758 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:39.153235  259758 main.go:141] libmachine: (functional-471578) Calling .DriverName
I0214 20:55:39.153451  259758 ssh_runner.go:195] Run: systemctl --version
I0214 20:55:39.153481  259758 main.go:141] libmachine: (functional-471578) Calling .GetSSHHostname
I0214 20:55:39.156421  259758 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:39.156827  259758 main.go:141] libmachine: (functional-471578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:4b:21", ip: ""} in network mk-functional-471578: {Iface:virbr1 ExpiryTime:2025-02-14 21:52:33 +0000 UTC Type:0 Mac:52:54:00:4a:4b:21 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:functional-471578 Clientid:01:52:54:00:4a:4b:21}
I0214 20:55:39.156855  259758 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined IP address 192.168.39.172 and MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:39.156958  259758 main.go:141] libmachine: (functional-471578) Calling .GetSSHPort
I0214 20:55:39.157140  259758 main.go:141] libmachine: (functional-471578) Calling .GetSSHKeyPath
I0214 20:55:39.157332  259758 main.go:141] libmachine: (functional-471578) Calling .GetSSHUsername
I0214 20:55:39.157487  259758 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/functional-471578/id_rsa Username:docker}
I0214 20:55:39.280124  259758 ssh_runner.go:195] Run: sudo crictl images --output json
I0214 20:55:39.721623  259758 main.go:141] libmachine: Making call to close driver server
I0214 20:55:39.721642  259758 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:39.721997  259758 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:39.722011  259758 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
I0214 20:55:39.722041  259758 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:39.722059  259758 main.go:141] libmachine: Making call to close driver server
I0214 20:55:39.722071  259758 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:39.722369  259758 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:39.722388  259758 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:39.722399  259758 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471578 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["
gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-471578"],"size":"4943877"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc
"],"size":"4631262"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a
8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"762440bc74f23de39284e999dccf30ad3152bc12b6725ffae6c3b24bad0f81b7","repoDigests":["localhost/minikube-local-cache-test@sha256:6aee2fb80a63264c38d189b95e2d08ef9e997708c8faa54d5d9c7fac8e355115"],"repoTags":["localhost/minikub
e-local-cache-test:functional-471578"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":
["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io
/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471578 image ls --format json --alsologtostderr:
I0214 20:55:38.703201  259733 out.go:345] Setting OutFile to fd 1 ...
I0214 20:55:38.703314  259733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.703321  259733 out.go:358] Setting ErrFile to fd 2...
I0214 20:55:38.703326  259733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.703563  259733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
I0214 20:55:38.704369  259733 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.704516  259733 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.705029  259733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.705084  259733 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.720567  259733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
I0214 20:55:38.720991  259733 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.721548  259733 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.721574  259733 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.722045  259733 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.722302  259733 main.go:141] libmachine: (functional-471578) Calling .GetState
I0214 20:55:38.724409  259733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.724459  259733 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.739008  259733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
I0214 20:55:38.739500  259733 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.740092  259733 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.740123  259733 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.740450  259733 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.740635  259733 main.go:141] libmachine: (functional-471578) Calling .DriverName
I0214 20:55:38.740824  259733 ssh_runner.go:195] Run: systemctl --version
I0214 20:55:38.740865  259733 main.go:141] libmachine: (functional-471578) Calling .GetSSHHostname
I0214 20:55:38.743474  259733 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.743845  259733 main.go:141] libmachine: (functional-471578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:4b:21", ip: ""} in network mk-functional-471578: {Iface:virbr1 ExpiryTime:2025-02-14 21:52:33 +0000 UTC Type:0 Mac:52:54:00:4a:4b:21 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:functional-471578 Clientid:01:52:54:00:4a:4b:21}
I0214 20:55:38.743879  259733 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined IP address 192.168.39.172 and MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.743987  259733 main.go:141] libmachine: (functional-471578) Calling .GetSSHPort
I0214 20:55:38.744173  259733 main.go:141] libmachine: (functional-471578) Calling .GetSSHKeyPath
I0214 20:55:38.744344  259733 main.go:141] libmachine: (functional-471578) Calling .GetSSHUsername
I0214 20:55:38.744495  259733 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/functional-471578/id_rsa Username:docker}
I0214 20:55:38.864992  259733 ssh_runner.go:195] Run: sudo crictl images --output json
I0214 20:55:39.053611  259733 main.go:141] libmachine: Making call to close driver server
I0214 20:55:39.053631  259733 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:39.053909  259733 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:39.053926  259733 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:39.053943  259733 main.go:141] libmachine: Making call to close driver server
I0214 20:55:39.053956  259733 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:39.054197  259733 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:39.054215  259733 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:39.054246  259733 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
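Editor's note: the JSON listing captured above is an array of image records carrying id, repoDigests, repoTags and size fields. The Go sketch below decodes output of that shape into a struct; the field names are read off the stdout shown here and should be treated as an assumption of this example rather than a documented schema.

// imagelist.go: sketch that decodes `image ls --format json` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes in the log above
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-471578",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}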

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471578 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-471578
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 762440bc74f23de39284e999dccf30ad3152bc12b6725ffae6c3b24bad0f81b7
repoDigests:
- localhost/minikube-local-cache-test@sha256:6aee2fb80a63264c38d189b95e2d08ef9e997708c8faa54d5d9c7fac8e355115
repoTags:
- localhost/minikube-local-cache-test:functional-471578
size: "3330"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471578 image ls --format yaml --alsologtostderr:
I0214 20:55:38.285569  259657 out.go:345] Setting OutFile to fd 1 ...
I0214 20:55:38.285687  259657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.285698  259657 out.go:358] Setting ErrFile to fd 2...
I0214 20:55:38.285704  259657 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.287503  259657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
I0214 20:55:38.288340  259657 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.288477  259657 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.288910  259657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.288952  259657 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.306179  259657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
I0214 20:55:38.306676  259657 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.307324  259657 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.307360  259657 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.307788  259657 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.308029  259657 main.go:141] libmachine: (functional-471578) Calling .GetState
I0214 20:55:38.309906  259657 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.309956  259657 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.326120  259657 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42463
I0214 20:55:38.326706  259657 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.327233  259657 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.327249  259657 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.327559  259657 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.327719  259657 main.go:141] libmachine: (functional-471578) Calling .DriverName
I0214 20:55:38.327865  259657 ssh_runner.go:195] Run: systemctl --version
I0214 20:55:38.327889  259657 main.go:141] libmachine: (functional-471578) Calling .GetSSHHostname
I0214 20:55:38.330653  259657 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.331119  259657 main.go:141] libmachine: (functional-471578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:4b:21", ip: ""} in network mk-functional-471578: {Iface:virbr1 ExpiryTime:2025-02-14 21:52:33 +0000 UTC Type:0 Mac:52:54:00:4a:4b:21 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:functional-471578 Clientid:01:52:54:00:4a:4b:21}
I0214 20:55:38.331137  259657 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined IP address 192.168.39.172 and MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.331292  259657 main.go:141] libmachine: (functional-471578) Calling .GetSSHPort
I0214 20:55:38.331438  259657 main.go:141] libmachine: (functional-471578) Calling .GetSSHKeyPath
I0214 20:55:38.331535  259657 main.go:141] libmachine: (functional-471578) Calling .GetSSHUsername
I0214 20:55:38.331637  259657 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/functional-471578/id_rsa Username:docker}
I0214 20:55:38.461219  259657 ssh_runner.go:195] Run: sudo crictl images --output json
I0214 20:55:38.638910  259657 main.go:141] libmachine: Making call to close driver server
I0214 20:55:38.638926  259657 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:38.639152  259657 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:38.639167  259657 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:38.639181  259657 main.go:141] libmachine: Making call to close driver server
I0214 20:55:38.639188  259657 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:38.639192  259657 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
I0214 20:55:38.639391  259657 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
I0214 20:55:38.639430  259657 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:38.639442  259657 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (10.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-471578 ssh pgrep buildkitd: exit status 1 (245.303049ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image build -t localhost/my-image:functional-471578 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 image build -t localhost/my-image:functional-471578 testdata/build --alsologtostderr: (10.171308799s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-471578 image build -t localhost/my-image:functional-471578 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9455d8888b4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-471578
--> 0af0c814b7d
Successfully tagged localhost/my-image:functional-471578
0af0c814b7d7158b82b3d7025c9d4f6392236db2050be6c9e081330f5f2ba8d7
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-471578 image build -t localhost/my-image:functional-471578 testdata/build --alsologtostderr:
I0214 20:55:38.592643  259710 out.go:345] Setting OutFile to fd 1 ...
I0214 20:55:38.592785  259710 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.592795  259710 out.go:358] Setting ErrFile to fd 2...
I0214 20:55:38.592799  259710 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0214 20:55:38.593005  259710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
I0214 20:55:38.593565  259710 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.594212  259710 config.go:182] Loaded profile config "functional-471578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0214 20:55:38.594790  259710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.594837  259710 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.611119  259710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43945
I0214 20:55:38.611513  259710 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.612224  259710 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.612258  259710 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.612612  259710 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.612865  259710 main.go:141] libmachine: (functional-471578) Calling .GetState
I0214 20:55:38.614720  259710 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0214 20:55:38.614768  259710 main.go:141] libmachine: Launching plugin server for driver kvm2
I0214 20:55:38.629014  259710 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
I0214 20:55:38.629379  259710 main.go:141] libmachine: () Calling .GetVersion
I0214 20:55:38.629820  259710 main.go:141] libmachine: Using API Version  1
I0214 20:55:38.629840  259710 main.go:141] libmachine: () Calling .SetConfigRaw
I0214 20:55:38.630159  259710 main.go:141] libmachine: () Calling .GetMachineName
I0214 20:55:38.630406  259710 main.go:141] libmachine: (functional-471578) Calling .DriverName
I0214 20:55:38.630656  259710 ssh_runner.go:195] Run: systemctl --version
I0214 20:55:38.630694  259710 main.go:141] libmachine: (functional-471578) Calling .GetSSHHostname
I0214 20:55:38.633222  259710 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.633585  259710 main.go:141] libmachine: (functional-471578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:4b:21", ip: ""} in network mk-functional-471578: {Iface:virbr1 ExpiryTime:2025-02-14 21:52:33 +0000 UTC Type:0 Mac:52:54:00:4a:4b:21 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:functional-471578 Clientid:01:52:54:00:4a:4b:21}
I0214 20:55:38.633627  259710 main.go:141] libmachine: (functional-471578) DBG | domain functional-471578 has defined IP address 192.168.39.172 and MAC address 52:54:00:4a:4b:21 in network mk-functional-471578
I0214 20:55:38.633766  259710 main.go:141] libmachine: (functional-471578) Calling .GetSSHPort
I0214 20:55:38.633937  259710 main.go:141] libmachine: (functional-471578) Calling .GetSSHKeyPath
I0214 20:55:38.634115  259710 main.go:141] libmachine: (functional-471578) Calling .GetSSHUsername
I0214 20:55:38.634260  259710 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/functional-471578/id_rsa Username:docker}
I0214 20:55:38.778291  259710 build_images.go:161] Building image from path: /tmp/build.4285227264.tar
I0214 20:55:38.778372  259710 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0214 20:55:38.805015  259710 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4285227264.tar
I0214 20:55:38.819417  259710 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4285227264.tar: stat -c "%s %y" /var/lib/minikube/build/build.4285227264.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4285227264.tar': No such file or directory
I0214 20:55:38.819448  259710 ssh_runner.go:362] scp /tmp/build.4285227264.tar --> /var/lib/minikube/build/build.4285227264.tar (3072 bytes)
I0214 20:55:38.876161  259710 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4285227264
I0214 20:55:38.901159  259710 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4285227264 -xf /var/lib/minikube/build/build.4285227264.tar
I0214 20:55:38.936327  259710 crio.go:315] Building image: /var/lib/minikube/build/build.4285227264
I0214 20:55:38.936401  259710 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-471578 /var/lib/minikube/build/build.4285227264 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0214 20:55:48.684564  259710 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-471578 /var/lib/minikube/build/build.4285227264 --cgroup-manager=cgroupfs: (9.748134809s)
I0214 20:55:48.684634  259710 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4285227264
I0214 20:55:48.695657  259710 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4285227264.tar
I0214 20:55:48.707628  259710 build_images.go:217] Built localhost/my-image:functional-471578 from /tmp/build.4285227264.tar
I0214 20:55:48.707658  259710 build_images.go:133] succeeded building to: functional-471578
I0214 20:55:48.707664  259710 build_images.go:134] failed building to: 
I0214 20:55:48.707695  259710 main.go:141] libmachine: Making call to close driver server
I0214 20:55:48.707711  259710 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:48.708037  259710 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:48.708054  259710 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:48.708090  259710 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
I0214 20:55:48.708150  259710 main.go:141] libmachine: Making call to close driver server
I0214 20:55:48.708161  259710 main.go:141] libmachine: (functional-471578) Calling .Close
I0214 20:55:48.708399  259710 main.go:141] libmachine: Successfully made call to close driver server
I0214 20:55:48.708423  259710 main.go:141] libmachine: Making call to close connection to plugin binary
I0214 20:55:48.708450  259710 main.go:141] libmachine: (functional-471578) DBG | Closing plugin on server side
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.65s)
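For reference, the flow this test exercises can be reproduced by hand against the same profile; a minimal sketch (the build-context directory is illustrative, the image tag and profile name are taken from the log above):

	out/minikube-linux-amd64 -p functional-471578 image build -t localhost/my-image:functional-471578 ./testdata/build
	out/minikube-linux-amd64 -p functional-471578 image ls

As the ssh_runner lines show, minikube packs the build context into a tarball, copies it to /var/lib/minikube/build on the node, and runs `sudo podman build ... --cgroup-manager=cgroupfs` there because the container runtime is CRI-O.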

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.447326075s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-471578
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image load --daemon kicbase/echo-server:functional-471578 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-471578 image load --daemon kicbase/echo-server:functional-471578 --alsologtostderr: (3.422729592s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image load --daemon kicbase/echo-server:functional-471578 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-471578
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image load --daemon kicbase/echo-server:functional-471578 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image save kicbase/echo-server:functional-471578 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image rm kicbase/echo-server:functional-471578 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-471578
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-471578 image save --daemon kicbase/echo-server:functional-471578 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-471578
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-471578
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-471578
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-471578
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (187.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0214 21:00:13.382465  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.388868  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.400233  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.421597  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.462981  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.544407  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:13.705930  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:14.027628  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:14.669652  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:15.951921  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:18.513676  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:23.635059  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:33.876674  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:00:54.358215  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:01:35.319705  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:01:42.290440  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 start --ha --memory 2200 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m6.537475037s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (187.19s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 kubectl -- rollout status deployment/busybox: (3.271488511s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-52k6p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-lc446 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-v5x7n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-52k6p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-lc446 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-v5x7n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-52k6p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-lc446 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-v5x7n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.34s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-52k6p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-52k6p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-lc446 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-lc446 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-v5x7n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 kubectl -- exec busybox-58667487b6-v5x7n -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (50.6s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 node add --alsologtostderr -v 5: (49.764758415s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (50.60s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-018577 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.92s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --output json --alsologtostderr -v 5
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp testdata/cp-test.txt ha-018577:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1594434162/001/cp-test_ha-018577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577:/home/docker/cp-test.txt ha-018577-m02:/home/docker/cp-test_ha-018577_ha-018577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test_ha-018577_ha-018577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577:/home/docker/cp-test.txt ha-018577-m03:/home/docker/cp-test_ha-018577_ha-018577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test_ha-018577_ha-018577-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577:/home/docker/cp-test.txt ha-018577-m04:/home/docker/cp-test_ha-018577_ha-018577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test_ha-018577_ha-018577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp testdata/cp-test.txt ha-018577-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1594434162/001/cp-test_ha-018577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m02:/home/docker/cp-test.txt ha-018577:/home/docker/cp-test_ha-018577-m02_ha-018577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test_ha-018577-m02_ha-018577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m02:/home/docker/cp-test.txt ha-018577-m03:/home/docker/cp-test_ha-018577-m02_ha-018577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test_ha-018577-m02_ha-018577-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m02:/home/docker/cp-test.txt ha-018577-m04:/home/docker/cp-test_ha-018577-m02_ha-018577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test_ha-018577-m02_ha-018577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp testdata/cp-test.txt ha-018577-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1594434162/001/cp-test_ha-018577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m03:/home/docker/cp-test.txt ha-018577:/home/docker/cp-test_ha-018577-m03_ha-018577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test_ha-018577-m03_ha-018577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m03:/home/docker/cp-test.txt ha-018577-m02:/home/docker/cp-test_ha-018577-m03_ha-018577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test_ha-018577-m03_ha-018577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m03:/home/docker/cp-test.txt ha-018577-m04:/home/docker/cp-test_ha-018577-m03_ha-018577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test_ha-018577-m03_ha-018577-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp testdata/cp-test.txt ha-018577-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1594434162/001/cp-test_ha-018577-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m04:/home/docker/cp-test.txt ha-018577:/home/docker/cp-test_ha-018577-m04_ha-018577.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577 "sudo cat /home/docker/cp-test_ha-018577-m04_ha-018577.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m04:/home/docker/cp-test.txt ha-018577-m02:/home/docker/cp-test_ha-018577-m04_ha-018577-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test_ha-018577-m04_ha-018577-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 cp ha-018577-m04:/home/docker/cp-test.txt ha-018577-m03:/home/docker/cp-test_ha-018577-m04_ha-018577-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m03 "sudo cat /home/docker/cp-test_ha-018577-m04_ha-018577-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.92s)
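The copy matrix above repeats the same two-step pattern for every (source, destination) pair; one instance, taken verbatim from the log, is:

	out/minikube-linux-amd64 -p ha-018577 cp testdata/cp-test.txt ha-018577-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-018577 ssh -n ha-018577-m02 "sudo cat /home/docker/cp-test.txt"

`minikube cp` pushes the file to the named node and `minikube ssh -n` reads it back to verify the contents.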

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node stop m02 --alsologtostderr -v 5
E0214 21:02:57.241621  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 node stop m02 --alsologtostderr -v 5: (1m30.646598612s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5: exit status 7 (639.422876ms)

                                                
                                                
-- stdout --
	ha-018577
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-018577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-018577-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-018577-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:04:25.932721  264979 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:04:25.933127  264979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:04:25.933145  264979 out.go:358] Setting ErrFile to fd 2...
	I0214 21:04:25.933154  264979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:04:25.933485  264979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:04:25.933666  264979 out.go:352] Setting JSON to false
	I0214 21:04:25.933700  264979 mustload.go:65] Loading cluster: ha-018577
	I0214 21:04:25.933808  264979 notify.go:220] Checking for updates...
	I0214 21:04:25.934097  264979 config.go:182] Loaded profile config "ha-018577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:04:25.934123  264979 status.go:174] checking status of ha-018577 ...
	I0214 21:04:25.934517  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:25.934572  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:25.956039  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0214 21:04:25.956425  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:25.956980  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:25.957003  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:25.957371  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:25.957576  264979 main.go:141] libmachine: (ha-018577) Calling .GetState
	I0214 21:04:25.959122  264979 status.go:371] ha-018577 host status = "Running" (err=<nil>)
	I0214 21:04:25.959137  264979 host.go:66] Checking if "ha-018577" exists ...
	I0214 21:04:25.959403  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:25.959433  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:25.973979  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I0214 21:04:25.974323  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:25.974785  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:25.974812  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:25.975116  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:25.975271  264979 main.go:141] libmachine: (ha-018577) Calling .GetIP
	I0214 21:04:25.977801  264979 main.go:141] libmachine: (ha-018577) DBG | domain ha-018577 has defined MAC address 52:54:00:24:9a:c4 in network mk-ha-018577
	I0214 21:04:25.978187  264979 main.go:141] libmachine: (ha-018577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:9a:c4", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 21:58:51 +0000 UTC Type:0 Mac:52:54:00:24:9a:c4 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-018577 Clientid:01:52:54:00:24:9a:c4}
	I0214 21:04:25.978209  264979 main.go:141] libmachine: (ha-018577) DBG | domain ha-018577 has defined IP address 192.168.39.150 and MAC address 52:54:00:24:9a:c4 in network mk-ha-018577
	I0214 21:04:25.978367  264979 host.go:66] Checking if "ha-018577" exists ...
	I0214 21:04:25.978794  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:25.978833  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:25.992666  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I0214 21:04:25.992978  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:25.993376  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:25.993396  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:25.993690  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:25.993881  264979 main.go:141] libmachine: (ha-018577) Calling .DriverName
	I0214 21:04:25.994072  264979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:04:25.994095  264979 main.go:141] libmachine: (ha-018577) Calling .GetSSHHostname
	I0214 21:04:25.996627  264979 main.go:141] libmachine: (ha-018577) DBG | domain ha-018577 has defined MAC address 52:54:00:24:9a:c4 in network mk-ha-018577
	I0214 21:04:25.997117  264979 main.go:141] libmachine: (ha-018577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:9a:c4", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 21:58:51 +0000 UTC Type:0 Mac:52:54:00:24:9a:c4 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-018577 Clientid:01:52:54:00:24:9a:c4}
	I0214 21:04:25.997144  264979 main.go:141] libmachine: (ha-018577) DBG | domain ha-018577 has defined IP address 192.168.39.150 and MAC address 52:54:00:24:9a:c4 in network mk-ha-018577
	I0214 21:04:25.997267  264979 main.go:141] libmachine: (ha-018577) Calling .GetSSHPort
	I0214 21:04:25.997472  264979 main.go:141] libmachine: (ha-018577) Calling .GetSSHKeyPath
	I0214 21:04:25.997620  264979 main.go:141] libmachine: (ha-018577) Calling .GetSSHUsername
	I0214 21:04:25.997761  264979 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/ha-018577/id_rsa Username:docker}
	I0214 21:04:26.085661  264979 ssh_runner.go:195] Run: systemctl --version
	I0214 21:04:26.091890  264979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:04:26.107319  264979 kubeconfig.go:125] found "ha-018577" server: "https://192.168.39.254:8443"
	I0214 21:04:26.107353  264979 api_server.go:166] Checking apiserver status ...
	I0214 21:04:26.107390  264979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:04:26.125993  264979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1098/cgroup
	W0214 21:04:26.137385  264979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1098/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0214 21:04:26.137418  264979 ssh_runner.go:195] Run: ls
	I0214 21:04:26.142268  264979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0214 21:04:26.146999  264979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0214 21:04:26.147020  264979 status.go:463] ha-018577 apiserver status = Running (err=<nil>)
	I0214 21:04:26.147029  264979 status.go:176] ha-018577 status: &{Name:ha-018577 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:04:26.147072  264979 status.go:174] checking status of ha-018577-m02 ...
	I0214 21:04:26.147378  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.147413  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.162321  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41371
	I0214 21:04:26.162851  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.163355  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.163376  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.163710  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.163926  264979 main.go:141] libmachine: (ha-018577-m02) Calling .GetState
	I0214 21:04:26.165645  264979 status.go:371] ha-018577-m02 host status = "Stopped" (err=<nil>)
	I0214 21:04:26.165658  264979 status.go:384] host is not running, skipping remaining checks
	I0214 21:04:26.165663  264979 status.go:176] ha-018577-m02 status: &{Name:ha-018577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:04:26.165676  264979 status.go:174] checking status of ha-018577-m03 ...
	I0214 21:04:26.165944  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.165975  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.179889  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0214 21:04:26.180328  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.180789  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.180809  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.181099  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.181286  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetState
	I0214 21:04:26.182704  264979 status.go:371] ha-018577-m03 host status = "Running" (err=<nil>)
	I0214 21:04:26.182722  264979 host.go:66] Checking if "ha-018577-m03" exists ...
	I0214 21:04:26.182991  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.183030  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.196913  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45425
	I0214 21:04:26.197285  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.197754  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.197775  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.198122  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.198316  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetIP
	I0214 21:04:26.201365  264979 main.go:141] libmachine: (ha-018577-m03) DBG | domain ha-018577-m03 has defined MAC address 52:54:00:76:bd:e9 in network mk-ha-018577
	I0214 21:04:26.201812  264979 main.go:141] libmachine: (ha-018577-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:bd:e9", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 22:00:46 +0000 UTC Type:0 Mac:52:54:00:76:bd:e9 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-018577-m03 Clientid:01:52:54:00:76:bd:e9}
	I0214 21:04:26.201832  264979 main.go:141] libmachine: (ha-018577-m03) DBG | domain ha-018577-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:bd:e9 in network mk-ha-018577
	I0214 21:04:26.202013  264979 host.go:66] Checking if "ha-018577-m03" exists ...
	I0214 21:04:26.202303  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.202336  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.215993  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0214 21:04:26.216341  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.216755  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.216772  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.217081  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.217277  264979 main.go:141] libmachine: (ha-018577-m03) Calling .DriverName
	I0214 21:04:26.217472  264979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:04:26.217494  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetSSHHostname
	I0214 21:04:26.220065  264979 main.go:141] libmachine: (ha-018577-m03) DBG | domain ha-018577-m03 has defined MAC address 52:54:00:76:bd:e9 in network mk-ha-018577
	I0214 21:04:26.220478  264979 main.go:141] libmachine: (ha-018577-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:bd:e9", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 22:00:46 +0000 UTC Type:0 Mac:52:54:00:76:bd:e9 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-018577-m03 Clientid:01:52:54:00:76:bd:e9}
	I0214 21:04:26.220504  264979 main.go:141] libmachine: (ha-018577-m03) DBG | domain ha-018577-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:76:bd:e9 in network mk-ha-018577
	I0214 21:04:26.220643  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetSSHPort
	I0214 21:04:26.220802  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetSSHKeyPath
	I0214 21:04:26.220911  264979 main.go:141] libmachine: (ha-018577-m03) Calling .GetSSHUsername
	I0214 21:04:26.221010  264979 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/ha-018577-m03/id_rsa Username:docker}
	I0214 21:04:26.308686  264979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:04:26.326407  264979 kubeconfig.go:125] found "ha-018577" server: "https://192.168.39.254:8443"
	I0214 21:04:26.326433  264979 api_server.go:166] Checking apiserver status ...
	I0214 21:04:26.326468  264979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:04:26.342058  264979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0214 21:04:26.352158  264979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0214 21:04:26.352211  264979 ssh_runner.go:195] Run: ls
	I0214 21:04:26.356493  264979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0214 21:04:26.361850  264979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0214 21:04:26.361875  264979 status.go:463] ha-018577-m03 apiserver status = Running (err=<nil>)
	I0214 21:04:26.361885  264979 status.go:176] ha-018577-m03 status: &{Name:ha-018577-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:04:26.361904  264979 status.go:174] checking status of ha-018577-m04 ...
	I0214 21:04:26.362287  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.362338  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.377462  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0214 21:04:26.377845  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.378294  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.378312  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.378655  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.378891  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetState
	I0214 21:04:26.380363  264979 status.go:371] ha-018577-m04 host status = "Running" (err=<nil>)
	I0214 21:04:26.380377  264979 host.go:66] Checking if "ha-018577-m04" exists ...
	I0214 21:04:26.380627  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.380658  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.394990  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I0214 21:04:26.395358  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.395809  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.395838  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.396174  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.396417  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetIP
	I0214 21:04:26.399100  264979 main.go:141] libmachine: (ha-018577-m04) DBG | domain ha-018577-m04 has defined MAC address 52:54:00:7a:86:c4 in network mk-ha-018577
	I0214 21:04:26.399510  264979 main.go:141] libmachine: (ha-018577-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:86:c4", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 22:02:05 +0000 UTC Type:0 Mac:52:54:00:7a:86:c4 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-018577-m04 Clientid:01:52:54:00:7a:86:c4}
	I0214 21:04:26.399543  264979 main.go:141] libmachine: (ha-018577-m04) DBG | domain ha-018577-m04 has defined IP address 192.168.39.208 and MAC address 52:54:00:7a:86:c4 in network mk-ha-018577
	I0214 21:04:26.399678  264979 host.go:66] Checking if "ha-018577-m04" exists ...
	I0214 21:04:26.399994  264979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:04:26.400040  264979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:04:26.413467  264979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0214 21:04:26.413844  264979 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:04:26.414410  264979 main.go:141] libmachine: Using API Version  1
	I0214 21:04:26.414427  264979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:04:26.414789  264979 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:04:26.414956  264979 main.go:141] libmachine: (ha-018577-m04) Calling .DriverName
	I0214 21:04:26.415107  264979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:04:26.415130  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetSSHHostname
	I0214 21:04:26.417369  264979 main.go:141] libmachine: (ha-018577-m04) DBG | domain ha-018577-m04 has defined MAC address 52:54:00:7a:86:c4 in network mk-ha-018577
	I0214 21:04:26.417746  264979 main.go:141] libmachine: (ha-018577-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7a:86:c4", ip: ""} in network mk-ha-018577: {Iface:virbr1 ExpiryTime:2025-02-14 22:02:05 +0000 UTC Type:0 Mac:52:54:00:7a:86:c4 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-018577-m04 Clientid:01:52:54:00:7a:86:c4}
	I0214 21:04:26.417773  264979 main.go:141] libmachine: (ha-018577-m04) DBG | domain ha-018577-m04 has defined IP address 192.168.39.208 and MAC address 52:54:00:7a:86:c4 in network mk-ha-018577
	I0214 21:04:26.417896  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetSSHPort
	I0214 21:04:26.418075  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetSSHKeyPath
	I0214 21:04:26.418206  264979 main.go:141] libmachine: (ha-018577-m04) Calling .GetSSHUsername
	I0214 21:04:26.418312  264979 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/ha-018577-m04/id_rsa Username:docker}
	I0214 21:04:26.506595  264979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:04:26.524418  264979 status.go:176] ha-018577-m04 status: &{Name:ha-018577-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.29s)
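Note that the non-zero exit recorded above is expected while m02 is down: `minikube status` signals the degraded cluster through its exit code (7 in this run) while still printing per-node state on stdout, which is what the test parses. A quick way to observe this by hand (command taken from the log; the trailing echo is illustrative):

	out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5; echo "exit: $?"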

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.84s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 node start m02 --alsologtostderr -v 5: (22.855368847s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.93s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 stop --alsologtostderr -v 5
E0214 21:05:13.381781  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:05:41.082926  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:06:42.291200  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:08:05.365127  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 stop --alsologtostderr -v 5: (4m33.668651766s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 start --wait true --alsologtostderr -v 5
E0214 21:10:13.382374  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 start --wait true --alsologtostderr -v 5: (2m6.149920084s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.93s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.09s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node delete m03 --alsologtostderr -v 5
E0214 21:11:42.294793  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 node delete m03 --alsologtostderr -v 5: (17.366358345s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.09s)
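The go-template query above only extracts each node's Ready condition; an equivalent, arguably more readable check (illustrative, not part of the test) would be:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'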

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.5s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 stop --alsologtostderr -v 5
E0214 21:15:13.381621  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 stop --alsologtostderr -v 5: (4m32.389887868s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5: exit status 7 (112.47391ms)

                                                
                                                
-- stdout --
	ha-018577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-018577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-018577-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:16:22.984827  268951 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:16:22.984951  268951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:16:22.984960  268951 out.go:358] Setting ErrFile to fd 2...
	I0214 21:16:22.984964  268951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:16:22.985134  268951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:16:22.985342  268951 out.go:352] Setting JSON to false
	I0214 21:16:22.985381  268951 mustload.go:65] Loading cluster: ha-018577
	I0214 21:16:22.985496  268951 notify.go:220] Checking for updates...
	I0214 21:16:22.985781  268951 config.go:182] Loaded profile config "ha-018577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:16:22.985806  268951 status.go:174] checking status of ha-018577 ...
	I0214 21:16:22.986212  268951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:16:22.986251  268951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:16:23.010424  268951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I0214 21:16:23.010876  268951 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:16:23.011387  268951 main.go:141] libmachine: Using API Version  1
	I0214 21:16:23.011410  268951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:16:23.011789  268951 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:16:23.011971  268951 main.go:141] libmachine: (ha-018577) Calling .GetState
	I0214 21:16:23.013344  268951 status.go:371] ha-018577 host status = "Stopped" (err=<nil>)
	I0214 21:16:23.013359  268951 status.go:384] host is not running, skipping remaining checks
	I0214 21:16:23.013390  268951 status.go:176] ha-018577 status: &{Name:ha-018577 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:16:23.013417  268951 status.go:174] checking status of ha-018577-m02 ...
	I0214 21:16:23.013813  268951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:16:23.013860  268951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:16:23.027903  268951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0214 21:16:23.028289  268951 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:16:23.028650  268951 main.go:141] libmachine: Using API Version  1
	I0214 21:16:23.028673  268951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:16:23.028985  268951 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:16:23.029157  268951 main.go:141] libmachine: (ha-018577-m02) Calling .GetState
	I0214 21:16:23.030479  268951 status.go:371] ha-018577-m02 host status = "Stopped" (err=<nil>)
	I0214 21:16:23.030491  268951 status.go:384] host is not running, skipping remaining checks
	I0214 21:16:23.030496  268951 status.go:176] ha-018577-m02 status: &{Name:ha-018577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:16:23.030508  268951 status.go:174] checking status of ha-018577-m04 ...
	I0214 21:16:23.030871  268951 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:16:23.030911  268951 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:16:23.044626  268951 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0214 21:16:23.045027  268951 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:16:23.045464  268951 main.go:141] libmachine: Using API Version  1
	I0214 21:16:23.045498  268951 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:16:23.045816  268951 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:16:23.046007  268951 main.go:141] libmachine: (ha-018577-m04) Calling .GetState
	I0214 21:16:23.047390  268951 status.go:371] ha-018577-m04 host status = "Stopped" (err=<nil>)
	I0214 21:16:23.047406  268951 status.go:384] host is not running, skipping remaining checks
	I0214 21:16:23.047411  268951 status.go:176] ha-018577-m04 status: &{Name:ha-018577-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.50s)
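
The non-zero exit above is recorded by the harness but the test still passes: with every node stopped, `minikube status` exits 7 while still printing per-node state. A small Go sketch of reading both the output and the exit code, assuming the binary path and profile name shown in this report.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Profile and binary path taken from the log above; adjust for a local run.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-018577", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // host/kubelet/apiserver/kubeconfig lines as shown above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the fully stopped cluster above this prints 7.
		fmt.Println("exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}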

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (118.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E0214 21:16:36.445033  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:16:42.290559  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m58.264515974s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (69.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-018577 node add --control-plane --alsologtostderr -v 5: (1m8.783460065s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-018577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (69.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (80.79s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-363752 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0214 21:20:13.382214  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-363752 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.794229619s)
--- PASS: TestJSONOutput/start/Command (80.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-363752 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-363752 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-363752 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-363752 --output=json --user=testUser: (7.358252958s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-767833 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-767833 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.288012ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5a393f8a-4da6-43f8-a2a3-4781896e22e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-767833] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c57a2cc7-ab2a-4105-a5b1-260ad240f349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20315"}}
	{"specversion":"1.0","id":"9e5c74e4-5839-4567-86cf-e2e220dd3ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cf1a3a09-7b89-45e4-9d23-3f952a54c917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig"}}
	{"specversion":"1.0","id":"ed78f09a-7d12-41d4-9a98-73ef035bb320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube"}}
	{"specversion":"1.0","id":"1fb9a1c2-afe1-4d8d-8ad8-0c8d622be8e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ac11e1c8-0d87-4ba8-a9bd-6777bd9a0b49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a6be904c-f5b4-44c4-bcda-b588450b37f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-767833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-767833
--- PASS: TestErrorJSONOutput (0.19s)
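
The stdout block above shows the shape of minikube's --output=json stream: one CloudEvents-style JSON object per line with specversion, id, source, type and a string-valued data map. A minimal Go sketch for consuming such a stream follows; the struct is an illustrative mirror of the fields visible here, not minikube's own type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative mirror of the fields visible in the stdout above.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Example: out/minikube-linux-amd64 start -p demo --output=json | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		// Error events (type io.k8s.sigs.minikube.error) carry "exitcode",
		// "name" and "message", as in the DRV_UNSUPPORTED_OS event above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}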

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (85.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-573652 --driver=kvm2  --container-runtime=crio
E0214 21:21:42.298146  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-573652 --driver=kvm2  --container-runtime=crio: (41.45908787s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-588729 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-588729 --driver=kvm2  --container-runtime=crio: (41.231975157s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-573652
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-588729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-588729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-588729
helpers_test.go:175: Cleaning up "first-573652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-573652
--- PASS: TestMinikubeProfile (85.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-837914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-837914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.29882578s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.30s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837914 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-837914 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857422 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.871542026s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.57s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-837914 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-857422
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-857422: (1.274793275s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857422
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857422: (20.730361845s)
--- PASS: TestMountStart/serial/RestartStopped (21.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857422 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (108.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-806010 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0214 21:24:45.367517  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:25:13.382203  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-806010 --wait=true --memory=2200 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.956078866s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-806010 -- rollout status deployment/busybox: (3.524619422s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-7nxx5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-85vd6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-7nxx5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-85vd6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-7nxx5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-85vd6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.99s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-7nxx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-7nxx5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-85vd6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-806010 -- exec busybox-58667487b6-85vd6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (48.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-806010 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-806010 -v=5 --alsologtostderr: (48.01462407s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-806010 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp testdata/cp-test.txt multinode-806010:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1888248735/001/cp-test_multinode-806010.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010:/home/docker/cp-test.txt multinode-806010-m02:/home/docker/cp-test_multinode-806010_multinode-806010-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test_multinode-806010_multinode-806010-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010:/home/docker/cp-test.txt multinode-806010-m03:/home/docker/cp-test_multinode-806010_multinode-806010-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test_multinode-806010_multinode-806010-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp testdata/cp-test.txt multinode-806010-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1888248735/001/cp-test_multinode-806010-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m02:/home/docker/cp-test.txt multinode-806010:/home/docker/cp-test_multinode-806010-m02_multinode-806010.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test_multinode-806010-m02_multinode-806010.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m02:/home/docker/cp-test.txt multinode-806010-m03:/home/docker/cp-test_multinode-806010-m02_multinode-806010-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test_multinode-806010-m02_multinode-806010-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp testdata/cp-test.txt multinode-806010-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1888248735/001/cp-test_multinode-806010-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m03:/home/docker/cp-test.txt multinode-806010:/home/docker/cp-test_multinode-806010-m03_multinode-806010.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010 "sudo cat /home/docker/cp-test_multinode-806010-m03_multinode-806010.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 cp multinode-806010-m03:/home/docker/cp-test.txt multinode-806010-m02:/home/docker/cp-test_multinode-806010-m03_multinode-806010-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 ssh -n multinode-806010-m02 "sudo cat /home/docker/cp-test_multinode-806010-m03_multinode-806010-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 node stop m03
E0214 21:26:42.290158  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-806010 node stop m03: (1.388856191s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-806010 status: exit status 7 (421.081608ms)

                                                
                                                
-- stdout --
	multinode-806010
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-806010-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-806010-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr: exit status 7 (414.839034ms)

                                                
                                                
-- stdout --
	multinode-806010
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-806010-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-806010-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:26:43.133420  276832 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:26:43.133514  276832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:43.133522  276832 out.go:358] Setting ErrFile to fd 2...
	I0214 21:26:43.133527  276832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:26:43.133689  276832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:26:43.133851  276832 out.go:352] Setting JSON to false
	I0214 21:26:43.133877  276832 mustload.go:65] Loading cluster: multinode-806010
	I0214 21:26:43.134014  276832 notify.go:220] Checking for updates...
	I0214 21:26:43.134317  276832 config.go:182] Loaded profile config "multinode-806010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:26:43.134347  276832 status.go:174] checking status of multinode-806010 ...
	I0214 21:26:43.134916  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.134965  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.151517  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36295
	I0214 21:26:43.151974  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.152632  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.152666  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.152989  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.153215  276832 main.go:141] libmachine: (multinode-806010) Calling .GetState
	I0214 21:26:43.154704  276832 status.go:371] multinode-806010 host status = "Running" (err=<nil>)
	I0214 21:26:43.154721  276832 host.go:66] Checking if "multinode-806010" exists ...
	I0214 21:26:43.155026  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.155059  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.170479  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40227
	I0214 21:26:43.170850  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.171407  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.171435  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.171770  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.171955  276832 main.go:141] libmachine: (multinode-806010) Calling .GetIP
	I0214 21:26:43.174740  276832 main.go:141] libmachine: (multinode-806010) DBG | domain multinode-806010 has defined MAC address 52:54:00:9a:5b:cc in network mk-multinode-806010
	I0214 21:26:43.175128  276832 main.go:141] libmachine: (multinode-806010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:5b:cc", ip: ""} in network mk-multinode-806010: {Iface:virbr1 ExpiryTime:2025-02-14 22:24:05 +0000 UTC Type:0 Mac:52:54:00:9a:5b:cc Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-806010 Clientid:01:52:54:00:9a:5b:cc}
	I0214 21:26:43.175147  276832 main.go:141] libmachine: (multinode-806010) DBG | domain multinode-806010 has defined IP address 192.168.39.95 and MAC address 52:54:00:9a:5b:cc in network mk-multinode-806010
	I0214 21:26:43.175306  276832 host.go:66] Checking if "multinode-806010" exists ...
	I0214 21:26:43.175570  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.175610  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.189979  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37583
	I0214 21:26:43.190368  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.190903  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.190925  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.191252  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.191540  276832 main.go:141] libmachine: (multinode-806010) Calling .DriverName
	I0214 21:26:43.191740  276832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:26:43.191779  276832 main.go:141] libmachine: (multinode-806010) Calling .GetSSHHostname
	I0214 21:26:43.194573  276832 main.go:141] libmachine: (multinode-806010) DBG | domain multinode-806010 has defined MAC address 52:54:00:9a:5b:cc in network mk-multinode-806010
	I0214 21:26:43.194982  276832 main.go:141] libmachine: (multinode-806010) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:5b:cc", ip: ""} in network mk-multinode-806010: {Iface:virbr1 ExpiryTime:2025-02-14 22:24:05 +0000 UTC Type:0 Mac:52:54:00:9a:5b:cc Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:multinode-806010 Clientid:01:52:54:00:9a:5b:cc}
	I0214 21:26:43.195008  276832 main.go:141] libmachine: (multinode-806010) DBG | domain multinode-806010 has defined IP address 192.168.39.95 and MAC address 52:54:00:9a:5b:cc in network mk-multinode-806010
	I0214 21:26:43.195147  276832 main.go:141] libmachine: (multinode-806010) Calling .GetSSHPort
	I0214 21:26:43.195310  276832 main.go:141] libmachine: (multinode-806010) Calling .GetSSHKeyPath
	I0214 21:26:43.195469  276832 main.go:141] libmachine: (multinode-806010) Calling .GetSSHUsername
	I0214 21:26:43.195604  276832 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/multinode-806010/id_rsa Username:docker}
	I0214 21:26:43.277631  276832 ssh_runner.go:195] Run: systemctl --version
	I0214 21:26:43.283349  276832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:26:43.297990  276832 kubeconfig.go:125] found "multinode-806010" server: "https://192.168.39.95:8443"
	I0214 21:26:43.298028  276832 api_server.go:166] Checking apiserver status ...
	I0214 21:26:43.298062  276832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 21:26:43.313606  276832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup
	W0214 21:26:43.322783  276832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0214 21:26:43.322816  276832 ssh_runner.go:195] Run: ls
	I0214 21:26:43.326826  276832 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8443/healthz ...
	I0214 21:26:43.331281  276832 api_server.go:279] https://192.168.39.95:8443/healthz returned 200:
	ok
	I0214 21:26:43.331304  276832 status.go:463] multinode-806010 apiserver status = Running (err=<nil>)
	I0214 21:26:43.331317  276832 status.go:176] multinode-806010 status: &{Name:multinode-806010 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:26:43.331348  276832 status.go:174] checking status of multinode-806010-m02 ...
	I0214 21:26:43.331635  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.331676  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.346970  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0214 21:26:43.347465  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.347955  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.347979  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.348352  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.348551  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetState
	I0214 21:26:43.350233  276832 status.go:371] multinode-806010-m02 host status = "Running" (err=<nil>)
	I0214 21:26:43.350247  276832 host.go:66] Checking if "multinode-806010-m02" exists ...
	I0214 21:26:43.350507  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.350536  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.364610  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43949
	I0214 21:26:43.364990  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.365441  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.365458  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.365726  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.365891  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetIP
	I0214 21:26:43.368628  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | domain multinode-806010-m02 has defined MAC address 52:54:00:37:4e:bf in network mk-multinode-806010
	I0214 21:26:43.368968  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:4e:bf", ip: ""} in network mk-multinode-806010: {Iface:virbr1 ExpiryTime:2025-02-14 22:25:03 +0000 UTC Type:0 Mac:52:54:00:37:4e:bf Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:multinode-806010-m02 Clientid:01:52:54:00:37:4e:bf}
	I0214 21:26:43.369004  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | domain multinode-806010-m02 has defined IP address 192.168.39.252 and MAC address 52:54:00:37:4e:bf in network mk-multinode-806010
	I0214 21:26:43.369131  276832 host.go:66] Checking if "multinode-806010-m02" exists ...
	I0214 21:26:43.369395  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.369425  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.383230  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37835
	I0214 21:26:43.383655  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.384116  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.384130  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.384474  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.384671  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .DriverName
	I0214 21:26:43.384864  276832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 21:26:43.384884  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetSSHHostname
	I0214 21:26:43.387197  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | domain multinode-806010-m02 has defined MAC address 52:54:00:37:4e:bf in network mk-multinode-806010
	I0214 21:26:43.387599  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:4e:bf", ip: ""} in network mk-multinode-806010: {Iface:virbr1 ExpiryTime:2025-02-14 22:25:03 +0000 UTC Type:0 Mac:52:54:00:37:4e:bf Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:multinode-806010-m02 Clientid:01:52:54:00:37:4e:bf}
	I0214 21:26:43.387629  276832 main.go:141] libmachine: (multinode-806010-m02) DBG | domain multinode-806010-m02 has defined IP address 192.168.39.252 and MAC address 52:54:00:37:4e:bf in network mk-multinode-806010
	I0214 21:26:43.387813  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetSSHPort
	I0214 21:26:43.387990  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetSSHKeyPath
	I0214 21:26:43.388133  276832 main.go:141] libmachine: (multinode-806010-m02) Calling .GetSSHUsername
	I0214 21:26:43.388248  276832 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20315-243456/.minikube/machines/multinode-806010-m02/id_rsa Username:docker}
	I0214 21:26:43.469592  276832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 21:26:43.483320  276832 status.go:176] multinode-806010-m02 status: &{Name:multinode-806010-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:26:43.483347  276832 status.go:174] checking status of multinode-806010-m03 ...
	I0214 21:26:43.483599  276832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:26:43.483628  276832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:26:43.497904  276832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36325
	I0214 21:26:43.498209  276832 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:26:43.498598  276832 main.go:141] libmachine: Using API Version  1
	I0214 21:26:43.498643  276832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:26:43.498916  276832 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:26:43.499102  276832 main.go:141] libmachine: (multinode-806010-m03) Calling .GetState
	I0214 21:26:43.500386  276832 status.go:371] multinode-806010-m03 host status = "Stopped" (err=<nil>)
	I0214 21:26:43.500399  276832 status.go:384] host is not running, skipping remaining checks
	I0214 21:26:43.500404  276832 status.go:176] multinode-806010-m03 status: &{Name:multinode-806010-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
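
The stderr above also shows how the status command checks the apiserver: it probes https://192.168.39.95:8443/healthz (api_server.go:253) and reports Running on a 200 (api_server.go:279). A standalone sketch of the same probe; it skips TLS verification for brevity, whereas minikube authenticates with the profile's certificates, so treat it purely as an illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Endpoint taken from the log above (apiserver of multinode-806010).
	const healthz = "https://192.168.39.95:8443/healthz"

	client := &http.Client{Transport: &http.Transport{
		// Brevity only: minikube itself uses the profile's client certs instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(healthz)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // the log above shows 200 / ok
}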

                                                
                                    
TestMultiNode/serial/StartAfterStop (35.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-806010 node start m03 -v=5 --alsologtostderr: (34.968946655s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.57s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (314.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-806010
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-806010
E0214 21:30:13.381840  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-806010: (3m2.733439133s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-806010 --wait=true -v=5 --alsologtostderr
E0214 21:31:42.290297  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-806010 --wait=true -v=5 --alsologtostderr: (2m11.710797729s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-806010
--- PASS: TestMultiNode/serial/RestartKeepsNodes (314.54s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-806010 node delete m03: (1.992114676s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.50s)
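
The readiness assertion above leans on a Go template that walks every node's conditions and prints only the Ready status, one line per node. Unwrapped from the harness quoting, a minimal standalone form of that check looks like this (the two-line output is illustrative for a healthy two-node cluster):

	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	True
	True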

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 stop
E0214 21:33:16.446803  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:35:13.381538  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-806010 stop: (3m1.59470378s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-806010 status: exit status 7 (89.956731ms)

                                                
                                                
-- stdout --
	multinode-806010
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-806010-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr: exit status 7 (92.563343ms)

                                                
                                                
-- stdout --
	multinode-806010
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-806010-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:35:37.848039  279615 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:35:37.848166  279615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:35:37.848178  279615 out.go:358] Setting ErrFile to fd 2...
	I0214 21:35:37.848184  279615 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:35:37.848367  279615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:35:37.848537  279615 out.go:352] Setting JSON to false
	I0214 21:35:37.848569  279615 mustload.go:65] Loading cluster: multinode-806010
	I0214 21:35:37.848689  279615 notify.go:220] Checking for updates...
	I0214 21:35:37.849017  279615 config.go:182] Loaded profile config "multinode-806010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:35:37.849041  279615 status.go:174] checking status of multinode-806010 ...
	I0214 21:35:37.849449  279615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:35:37.849491  279615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:35:37.874413  279615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41239
	I0214 21:35:37.874755  279615 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:35:37.875310  279615 main.go:141] libmachine: Using API Version  1
	I0214 21:35:37.875346  279615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:35:37.875664  279615 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:35:37.875851  279615 main.go:141] libmachine: (multinode-806010) Calling .GetState
	I0214 21:35:37.877369  279615 status.go:371] multinode-806010 host status = "Stopped" (err=<nil>)
	I0214 21:35:37.877384  279615 status.go:384] host is not running, skipping remaining checks
	I0214 21:35:37.877392  279615 status.go:176] multinode-806010 status: &{Name:multinode-806010 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 21:35:37.877418  279615 status.go:174] checking status of multinode-806010-m02 ...
	I0214 21:35:37.877694  279615 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0214 21:35:37.877729  279615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0214 21:35:37.891710  279615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0214 21:35:37.892043  279615 main.go:141] libmachine: () Calling .GetVersion
	I0214 21:35:37.892459  279615 main.go:141] libmachine: Using API Version  1
	I0214 21:35:37.892478  279615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0214 21:35:37.892759  279615 main.go:141] libmachine: () Calling .GetMachineName
	I0214 21:35:37.892950  279615 main.go:141] libmachine: (multinode-806010-m02) Calling .GetState
	I0214 21:35:37.894257  279615 status.go:371] multinode-806010-m02 host status = "Stopped" (err=<nil>)
	I0214 21:35:37.894269  279615 status.go:384] host is not running, skipping remaining checks
	I0214 21:35:37.894274  279615 status.go:176] multinode-806010-m02 status: &{Name:multinode-806010-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.78s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (96.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-806010 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0214 21:36:42.290841  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-806010 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.375470741s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-806010 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-806010
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-806010-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-806010-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.805171ms)

                                                
                                                
-- stdout --
	* [multinode-806010-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-806010-m02' is duplicated with machine name 'multinode-806010-m02' in profile 'multinode-806010'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-806010-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-806010-m03 --driver=kvm2  --container-runtime=crio: (42.528384151s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-806010
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-806010: exit status 80 (210.16326ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-806010 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-806010-m03 already exists in multinode-806010-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-806010-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.50s)
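
In short, the conflict check above asserts that a new profile cannot reuse a machine name that already belongs to another profile, while an unrelated name is accepted. A sketch of both outcomes, assuming the multinode-806010 profile from this run is still present:

	# Rejected with exit status 14 (MK_USAGE): the name collides with a node of profile multinode-806010
	$ minikube start -p multinode-806010-m02 --driver=kvm2 --container-runtime=crio
	# Accepted: the name is free, so a separate single-node profile is created
	$ minikube start -p multinode-806010-m03 --driver=kvm2 --container-runtime=crio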

                                                
                                    
TestScheduledStopUnix (110.76s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-412691 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-412691 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.134415085s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412691 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-412691 -n scheduled-stop-412691
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412691 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0214 21:43:18.519033  250783 retry.go:31] will retry after 75.2µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.520195  250783 retry.go:31] will retry after 77.64µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.521297  250783 retry.go:31] will retry after 286.527µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.522428  250783 retry.go:31] will retry after 272.47µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.523570  250783 retry.go:31] will retry after 686.238µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.524697  250783 retry.go:31] will retry after 384.457µs: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.525830  250783 retry.go:31] will retry after 1.395366ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.528032  250783 retry.go:31] will retry after 2.144566ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.531286  250783 retry.go:31] will retry after 2.119964ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.534492  250783 retry.go:31] will retry after 4.460575ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.539716  250783 retry.go:31] will retry after 5.737099ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.545950  250783 retry.go:31] will retry after 6.830823ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.553321  250783 retry.go:31] will retry after 17.285667ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.571562  250783 retry.go:31] will retry after 27.167551ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
I0214 21:43:18.599789  250783 retry.go:31] will retry after 16.314143ms: open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/scheduled-stop-412691/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412691 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412691 -n scheduled-stop-412691
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-412691
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412691 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-412691
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-412691: exit status 7 (76.956761ms)

                                                
                                                
-- stdout --
	scheduled-stop-412691
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412691 -n scheduled-stop-412691
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412691 -n scheduled-stop-412691: exit status 7 (66.063886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-412691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-412691
--- PASS: TestScheduledStopUnix (110.76s)
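
The scheduled-stop sequence exercised above reduces to a handful of commands; a sketch, with the profile name shortened to "demo" for readability:

	$ minikube stop -p demo --schedule 5m        # arm a stop five minutes out
	$ minikube stop -p demo --cancel-scheduled   # cancel the pending stop
	$ minikube stop -p demo --schedule 15s       # re-arm with a short delay and let it fire
	$ minikube status -p demo                    # reports Stopped, exit status 7, once the stop has run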

                                                
                                    
TestRunningBinaryUpgrade (219.69s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1444315219 start -p running-upgrade-318439 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0214 21:45:13.382069  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1444315219 start -p running-upgrade-318439 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.720937757s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-318439 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0214 21:46:42.290838  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-318439 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.292339258s)
helpers_test.go:175: Cleaning up "running-upgrade-318439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-318439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-318439: (1.094401893s)
--- PASS: TestRunningBinaryUpgrade (219.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (170.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1223056792 start -p stopped-upgrade-250953 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1223056792 start -p stopped-upgrade-250953 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.660505557s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1223056792 -p stopped-upgrade-250953 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1223056792 -p stopped-upgrade-250953 stop: (2.144351007s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-250953 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-250953 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.313928686s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (170.12s)

                                                
                                    
TestPause/serial/Start (128.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-865564 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-865564 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m8.477672426s)
--- PASS: TestPause/serial/Start (128.48s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-250953
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (63.554092ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-201553] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
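
The usage error above is the expected result of combining two mutually exclusive flags. A minimal reproduction with an illustrative profile name, followed by the unset command the error message itself recommends:

	# Exits with status 14 (MK_USAGE): --kubernetes-version cannot be combined with --no-kubernetes
	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# Clear any globally configured version, then start without Kubernetes
	$ minikube config unset kubernetes-version
	$ minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio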

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-201553 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-201553 --driver=kvm2  --container-runtime=crio: (45.460923896s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-201553 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.73s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.961792511s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-201553 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-201553 status -o json: exit status 2 (247.061404ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-201553","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-201553
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.15s)

                                                
                                    
TestNetworkPlugins/group/false (3.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-266997 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-266997 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.455827ms)

                                                
                                                
-- stdout --
	* [false-266997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20315
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0214 21:48:12.605390  286567 out.go:345] Setting OutFile to fd 1 ...
	I0214 21:48:12.605643  286567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:12.605653  286567 out.go:358] Setting ErrFile to fd 2...
	I0214 21:48:12.605660  286567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0214 21:48:12.605854  286567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20315-243456/.minikube/bin
	I0214 21:48:12.606416  286567 out.go:352] Setting JSON to false
	I0214 21:48:12.607341  286567 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":9037,"bootTime":1739560656,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0214 21:48:12.607439  286567 start.go:140] virtualization: kvm guest
	I0214 21:48:12.609388  286567 out.go:177] * [false-266997] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0214 21:48:12.610616  286567 out.go:177]   - MINIKUBE_LOCATION=20315
	I0214 21:48:12.610616  286567 notify.go:220] Checking for updates...
	I0214 21:48:12.612856  286567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 21:48:12.614022  286567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20315-243456/kubeconfig
	I0214 21:48:12.615112  286567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20315-243456/.minikube
	I0214 21:48:12.616156  286567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0214 21:48:12.617235  286567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 21:48:12.618835  286567 config.go:182] Loaded profile config "NoKubernetes-201553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0214 21:48:12.618982  286567 config.go:182] Loaded profile config "kubernetes-upgrade-041692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0214 21:48:12.619148  286567 config.go:182] Loaded profile config "pause-865564": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0214 21:48:12.619258  286567 driver.go:394] Setting default libvirt URI to qemu:///system
	I0214 21:48:12.654770  286567 out.go:177] * Using the kvm2 driver based on user configuration
	I0214 21:48:12.655698  286567 start.go:304] selected driver: kvm2
	I0214 21:48:12.655710  286567 start.go:908] validating driver "kvm2" against <nil>
	I0214 21:48:12.655721  286567 start.go:919] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 21:48:12.657643  286567 out.go:201] 
	W0214 21:48:12.658621  286567 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0214 21:48:12.659595  286567 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-266997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.250:8443
  name: NoKubernetes-201553
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.173:8443
  name: pause-865564
contexts:
- context:
    cluster: NoKubernetes-201553
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-201553
  name: NoKubernetes-201553
- context:
    cluster: pause-865564
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-865564
  name: pause-865564
current-context: pause-865564
kind: Config
preferences: {}
users:
- name: NoKubernetes-201553
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.key
- name: pause-865564
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-266997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-266997"

                                                
                                                
----------------------- debugLogs end: false-266997 [took: 2.740574827s] --------------------------------
helpers_test.go:175: Cleaning up "false-266997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-266997
--- PASS: TestNetworkPlugins/group/false (3.01s)
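
The rejection above is the point of this test: with the crio runtime a CNI is mandatory, so --cni=false is refused before any VM is created. A minimal reproduction with an illustrative profile name:

	# Exits with status 14 (MK_USAGE): the "crio" container runtime requires CNI
	$ minikube start -p demo --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio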

                                                
                                    
TestNoKubernetes/serial/Start (23.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-201553 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.758581925s)
--- PASS: TestNoKubernetes/serial/Start (23.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-201553 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-201553 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.946548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
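
Note that the non-zero exit above is the passing outcome: the test confirms that no kubelet unit is active in a --no-kubernetes profile, and systemctl is-active --quiet exits 0 only when the unit is active. Taken out of the harness, the check is simply:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-201553 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero (this run saw ssh status 3), i.e. kubelet is not running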

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-201553
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-201553: (1.287041177s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (65.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-201553 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-201553 --driver=kvm2  --container-runtime=crio: (1m5.36831451s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (65.37s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-201553 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-201553 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.019667ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (108.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-926549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-926549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m48.898557762s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (108.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (94.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-815168 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-815168 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m34.920403417s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-926549 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c07ccc81-014b-49f8-adc4-a0186c36d0e8] Pending
helpers_test.go:344: "busybox" [c07ccc81-014b-49f8-adc4-a0186c36d0e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c07ccc81-014b-49f8-adc4-a0186c36d0e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00293097s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-926549 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-926549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-926549 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-926549 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-926549 --alsologtostderr -v=3: (1m30.697904971s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-815168 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88f9b37d-eb9a-4b17-aa50-4c6eff977a3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [88f9b37d-eb9a-4b17-aa50-4c6eff977a3c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004640587s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-815168 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-815168 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-815168 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (90.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-815168 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-815168 --alsologtostderr -v=3: (1m30.879356262s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-728361 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-728361 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m0.097839868s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-926549 -n no-preload-926549
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-926549 -n no-preload-926549: exit status 7 (67.855496ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-926549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-926549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-926549 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (57.992890947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-926549 -n no-preload-926549
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-815168 -n embed-certs-815168
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-815168 -n embed-certs-815168: exit status 7 (74.731817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-815168 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-815168 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-815168 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (50.593161605s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-815168 -n embed-certs-815168
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-728361 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c041cffd-6a4d-48d4-bc76-e1c32f2168a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c041cffd-6a4d-48d4-bc76-e1c32f2168a0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004238336s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-728361 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-728361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-728361 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.007913308s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-728361 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (90.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-728361 --alsologtostderr -v=3
E0214 21:55:13.381839  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-728361 --alsologtostderr -v=3: (1m30.946725002s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jxtvc" [0055f6e1-a4f5-4aae-865b-8a2aa73374ac] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jxtvc" [0055f6e1-a4f5-4aae-865b-8a2aa73374ac] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.003411712s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b78mp" [c2d05439-77d9-49e5-aee6-2e5287a58483] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002730146s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jxtvc" [0055f6e1-a4f5-4aae-865b-8a2aa73374ac] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003302394s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-926549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-b78mp" [c2d05439-77d9-49e5-aee6-2e5287a58483] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003895846s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-815168 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-926549 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-926549 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-926549 -n no-preload-926549
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-926549 -n no-preload-926549: exit status 2 (236.522042ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-926549 -n no-preload-926549
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-926549 -n no-preload-926549: exit status 2 (250.933486ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-926549 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-926549 -n no-preload-926549
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-926549 -n no-preload-926549
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-815168 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-815168 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-815168 -n embed-certs-815168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-815168 -n embed-certs-815168: exit status 2 (242.29296ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-815168 -n embed-certs-815168
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-815168 -n embed-certs-815168: exit status 2 (262.715342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-815168 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-815168 -n embed-certs-815168
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-815168 -n embed-certs-815168
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (48.942897326s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.94s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (113.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0214 21:56:42.290765  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m53.579177322s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361: exit status 7 (84.869604ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-728361 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-728361 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-728361 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m6.100338608s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.104995713s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-268017 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-268017 --alsologtostderr -v=3: (8.395728016s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268017 -n newest-cni-268017
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268017 -n newest-cni-268017: exit status 7 (96.007566ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (51.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (50.947371188s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268017 -n newest-cni-268017
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (51.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-201745 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-201745 --alsologtostderr -v=3: (4.323794228s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-201745 -n old-k8s-version-201745: exit status 7 (91.501048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-201745 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wt2km" [460bd44a-95f3-4717-aeec-391933b67754] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wt2km" [460bd44a-95f3-4717-aeec-391933b67754] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.007671468s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-268017 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-268017 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268017 -n newest-cni-268017
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268017 -n newest-cni-268017: exit status 2 (235.181103ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268017 -n newest-cni-268017
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268017 -n newest-cni-268017: exit status 2 (238.000043ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-268017 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268017 -n newest-cni-268017
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268017 -n newest-cni-268017
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.551178083s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.55s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-266997 "pgrep -a kubelet"
I0214 21:57:59.920675  250783 config.go:182] Loaded profile config "auto-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-pr7sv" [599a760f-3d71-4d88-a9f8-9529c29d8290] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-pr7sv" [599a760f-3d71-4d88-a9f8-9529c29d8290] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003278991s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wt2km" [460bd44a-95f3-4717-aeec-391933b67754] Running
E0214 21:58:04.860904  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:04.867462  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:04.878903  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:04.900334  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:04.941915  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:05.023541  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:05.185116  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:05.372023  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:05.506690  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 21:58:06.148809  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004002987s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-728361 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-728361 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-728361 --alsologtostderr -v=1
E0214 21:58:07.430968  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-728361 --alsologtostderr -v=1: (1.532274371s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361: exit status 2 (269.522837ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361: exit status 2 (255.529654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-728361 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
E0214 21:58:09.992321  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-728361 -n default-k8s-diff-port-728361
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.54s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (89.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m29.091920278s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.09s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (101.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0214 21:58:45.837939  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m41.756710469s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r2jzl" [45cc5ef3-c3cb-4e51-a874-dc006c43cbf0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004502966s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-266997 "pgrep -a kubelet"
I0214 21:59:12.610206  250783 config.go:182] Loaded profile config "kindnet-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cbc8f" [e2d43e8e-3ebe-4990-8f2e-0e0bd07dc731] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cbc8f" [e2d43e8e-3ebe-4990-8f2e-0e0bd07dc731] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004020492s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)
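Note: Localhost and HairPin differ only in the target. The former connects to 127.0.0.1 inside the pod; the hairpin case connects back through what is presumably a Service named netcat in the default namespace, exercising hairpin NAT. To confirm that Service and see its cluster IP (a manual check, not part of the test):

    kubectl --context kindnet-266997 get svc netcat -o wide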

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (57.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (57.778108951s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fgqvw" [fdc13fd6-5710-4324-8e86-df3454e519ae] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005789551s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-266997 "pgrep -a kubelet"
I0214 21:59:46.897832  250783 config.go:182] Loaded profile config "calico-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-cdwt8" [e85e4793-fdde-4076-9a05-1875bd49f2fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-cdwt8" [e85e4793-fdde-4076-9a05-1875bd49f2fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003938948s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-266997 "pgrep -a kubelet"
I0214 22:00:09.376370  250783 config.go:182] Loaded profile config "custom-flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-f9kcr" [8965d562-3d82-4317-a707-459596c899c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 22:00:11.574348  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:00:13.382310  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-f9kcr" [8965d562-3d82-4317-a707-459596c899c4] Running
E0214 22:00:21.816589  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004223775s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m6.780999077s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.78s)
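Note: the --cni flag appears in two forms in this run: a built-in plugin name (--cni=flannel here, --cni=bridge later, plus the kindnet and calico groups) and a path to a custom manifest (--cni=testdata/kube-flannel.yaml in the custom-flannel group above). A minimal invocation of the custom-manifest form looks like the following, where the profile name and manifest path are hypothetical:

    out/minikube-linux-amd64 start -p my-flannel --driver=kvm2 --container-runtime=crio \
      --cni=/path/to/kube-flannel.yaml   # profile name and path are placeholders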

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-266997 "pgrep -a kubelet"
I0214 22:00:38.369376  250783 config.go:182] Loaded profile config "enable-default-cni-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sfk22" [b79a49db-10ad-47ec-a35c-250c52fad17d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sfk22" [b79a49db-10ad-47ec-a35c-250c52fad17d] Running
E0214 22:00:48.720979  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.369944234s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (59.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0214 22:00:42.298762  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-266997 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (59.907845857s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-89jwv" [ad852134-1305-456f-8204-2622daf9cf25] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004451779s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-266997 "pgrep -a kubelet"
I0214 22:01:29.491075  250783 config.go:182] Loaded profile config "flannel-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-j4rnv" [796bb38a-f9dd-4799-bceb-71391fd96881] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-j4rnv" [796bb38a-f9dd-4799-bceb-71391fd96881] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005024823s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-266997 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I0214 22:01:40.091948  250783 config.go:182] Loaded profile config "bridge-266997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-266997 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-n52s4" [239961b9-f6ee-4956-9d9c-624a87d65e09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 22:01:42.290649  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/addons-371781/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-n52s4" [239961b9-f6ee-4956-9d9c-624a87d65e09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003971175s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-266997 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-266997 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0214 22:02:45.182900  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.199652  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.206028  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.217369  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.238805  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.280212  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.361843  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.523435  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:00.845345  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:01.487052  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:02.768966  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:04.861028  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:05.330922  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:10.452828  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:20.694259  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:32.562854  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/no-preload-926549/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:03:41.175791  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.394767  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.401201  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.412533  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.433921  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.475205  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.556632  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:06.718167  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:07.039955  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:07.681994  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:08.963798  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:11.525409  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:16.646857  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:22.137999  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:26.889005  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.648073  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.654575  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.666022  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.687431  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.728805  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.810268  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:40.971851  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:41.293575  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:41.935827  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:43.217305  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:45.778791  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:47.370986  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:04:50.900349  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:01.142545  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:01.321234  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.666029  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.672376  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.683771  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.705141  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.746465  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.827990  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:09.989588  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:10.311490  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:10.953696  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:12.235459  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:13.381356  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/functional-471578/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:14.798110  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:19.920205  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:21.624058  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:28.332905  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/kindnet-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:29.025254  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/default-k8s-diff-port-728361/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:30.161922  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.598506  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.604891  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.616242  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.637692  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.679204  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.760732  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:38.922270  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:39.244108  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:39.885441  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:41.167601  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:43.729077  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:44.059869  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/auto-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:48.850559  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:50.643232  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/custom-flannel-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:05:59.092195  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/enable-default-cni-266997/client.crt: no such file or directory" logger="UnhandledError"
E0214 22:06:02.585777  250783 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/calico-266997/client.crt: no such file or directory" logger="UnhandledError"
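
Note: the repeated cert_rotation.go errors above all reference client.crt files under profiles (auto-266997, kindnet-266997, calico-266997, custom-flannel-266997, enable-default-cni-266997, ...) that had already been deleted when the watcher fired. They appear to be client-go's certificate-rotation watcher re-reading kubeconfig entries for torn-down profiles, i.e. log noise rather than test failures. Which profile certificates actually remain on disk can be checked directly, using the path from the log:

    ls /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/*/client.crt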

                                                
                                    

Test skip (40/321)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.14
272 TestNetworkPlugins/group/kubenet 3.09
280 TestNetworkPlugins/group/cilium 3.38
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-371781 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
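Note: all eight TunnelCmd skips share one cause: the test user cannot run 'route' without a password, and minikube tunnel needs to modify the host routing table to reach Service IPs. On a machine with interactive sudo the tunnel can be exercised manually, e.g.:

    out/minikube-linux-amd64 tunnel -p functional-471578   # profile name assumed from this run; requires privileges for route changes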

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-863894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-863894
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-266997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.250:8443
  name: NoKubernetes-201553
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.173:8443
  name: pause-865564
contexts:
- context:
    cluster: NoKubernetes-201553
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-201553
  name: NoKubernetes-201553
- context:
    cluster: pause-865564
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-865564
  name: pause-865564
current-context: pause-865564
kind: Config
preferences: {}
users:
- name: NoKubernetes-201553
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.key
- name: pause-865564
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-266997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-266997"

                                                
                                                
----------------------- debugLogs end: kubenet-266997 [took: 2.941428086s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-266997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-266997
--- SKIP: TestNetworkPlugins/group/kubenet (3.09s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-266997 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-266997" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.250:8443
  name: NoKubernetes-201553
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20315-243456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.173:8443
  name: pause-865564
contexts:
- context:
    cluster: NoKubernetes-201553
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-201553
  name: NoKubernetes-201553
- context:
    cluster: pause-865564
    extensions:
    - extension:
        last-update: Fri, 14 Feb 2025 21:48:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-865564
  name: pause-865564
current-context: pause-865564
kind: Config
preferences: {}
users:
- name: NoKubernetes-201553
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/NoKubernetes-201553/client.key
- name: pause-865564
  user:
    client-certificate: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.crt
    client-key: /home/jenkins/minikube-integration/20315-243456/.minikube/profiles/pause-865564/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-266997

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-266997" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-266997"

                                                
                                                
----------------------- debugLogs end: cilium-266997 [took: 3.236244807s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-266997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-266997
--- SKIP: TestNetworkPlugins/group/cilium (3.38s)
